ZincSearch: A Lightweight Elasticsearch Alternative
ZincSearch is a lightweight, single-binary search engine written in Go that provides an Elasticsearch-compatible API, making it easy to drop in as a replacement for log search and application search use cases without Elasticsearch's resource demands. This guide covers installing ZincSearch on Linux, managing indexes, ingesting data, searching, and integrating with Grafana.
Prerequisites
- Ubuntu 20.04+ / Debian 11+ or CentOS 8+ / Rocky Linux 8+
- 512 MB RAM minimum (ZincSearch is very memory-efficient)
- Disk space for the index data
- Root or sudo access
Installing ZincSearch
# Download the latest release
ZINC_VERSION="0.4.10"
wget https://github.com/zincsearch/zincsearch/releases/download/v${ZINC_VERSION}/zincsearch_${ZINC_VERSION}_Linux_x86_64.tar.gz
tar xzf zincsearch_${ZINC_VERSION}_Linux_x86_64.tar.gz
sudo mv zincsearch /usr/local/bin/
zincsearch version
# Create data directory and user
sudo useradd -r -s /sbin/nologin zinc
sudo mkdir -p /var/lib/zincsearch
sudo chown zinc:zinc /var/lib/zincsearch
# Quick start test (Ctrl+C to stop)
ZINC_DATA_PATH=/tmp/zinc \
ZINC_FIRST_ADMIN_USER=admin \
ZINC_FIRST_ADMIN_PASSWORD=adminpassword \
zincsearch
# Access the UI at http://localhost:4080
Running ZincSearch as a Service
# Create environment file
sudo tee /etc/zincsearch.env > /dev/null << 'EOF'
ZINC_DATA_PATH=/var/lib/zincsearch
ZINC_FIRST_ADMIN_USER=admin
ZINC_FIRST_ADMIN_PASSWORD=StrongPassword123!
ZINC_SERVER_ADDRESS=0.0.0.0:4080
ZINC_MAX_RESULTS=10000
GIN_MODE=release
EOF
sudo chmod 600 /etc/zincsearch.env
# Create systemd service
sudo tee /etc/systemd/system/zincsearch.service > /dev/null << 'EOF'
[Unit]
Description=ZincSearch
After=network.target
[Service]
User=zinc
Group=zinc
EnvironmentFile=/etc/zincsearch.env
ExecStart=/usr/local/bin/zincsearch
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now zincsearch
sudo systemctl status zincsearch
# Test
curl -u admin:StrongPassword123! http://localhost:4080/healthz
Expose ZincSearch behind Nginx with TLS for production:
server {
listen 443 ssl;
server_name search.example.com;
ssl_certificate /etc/letsencrypt/live/search.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/search.example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:4080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Index Management
ZincSearch auto-creates indexes when you index a document, but you can create them explicitly with settings:
# Create an index with explicit mapping
curl -u admin:StrongPassword123! \
-X PUT http://localhost:4080/api/index \
-H 'Content-Type: application/json' \
-d '{
"name": "logs",
"storage_type": "disk",
"mappings": {
"properties": {
"@timestamp": {"type": "date", "format": "2006-01-02T15:04:05Z07:00", "index": true},
"level": {"type": "keyword", "index": true, "store": true},
"message": {"type": "text", "index": true, "store": true, "analyzer": "standard"},
"service": {"type": "keyword", "index": true, "store": true},
"host": {"type": "keyword", "index": true, "store": true},
"duration_ms":{"type": "numeric", "index": true, "store": true}
}
},
"settings": {
"number_of_shards": 1,
"number_of_replicas": 0
}
}'
# List all indexes
curl -u admin:StrongPassword123! http://localhost:4080/api/index | python3 -m json.tool
# Get index details
curl -u admin:StrongPassword123! http://localhost:4080/api/index/logs
# Delete an index
curl -u admin:StrongPassword123! \
-X DELETE http://localhost:4080/api/index/logs
ZincSearch supports these field types: text, keyword, numeric, bool, date, ip.
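To illustrate the types not used in the logs index, here is a mapping sketch for a hypothetical access-log index with ip and bool fields (the index and field names are illustrative); it would be sent as the body of the same PUT /api/index call shown above:

```json
{
  "name": "access",
  "storage_type": "disk",
  "mappings": {
    "properties": {
      "client_ip": {"type": "ip", "index": true, "store": true},
      "cache_hit": {"type": "bool", "index": true, "store": true},
      "path":      {"type": "keyword", "index": true, "store": true}
    }
  }
}
```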
Indexing Documents
# Index a single document
curl -u admin:StrongPassword123! \
-X POST http://localhost:4080/api/logs/_doc \
-H 'Content-Type: application/json' \
-d '{
"@timestamp": "2024-01-15T12:00:00Z",
"level": "ERROR",
"message": "Database connection timeout after 30s",
"service": "api-gateway",
"host": "web-01",
"duration_ms": 30000
}'
# Bulk index documents (NDJSON format)
curl -u admin:StrongPassword123! \
-X POST http://localhost:4080/api/_bulk \
-H 'Content-Type: application/json' \
-d '{"index": {"_index": "logs"}}
{"@timestamp": "2024-01-15T12:01:00Z", "level": "INFO", "message": "Request processed", "service": "api-gateway", "host": "web-01", "duration_ms": 45}
{"index": {"_index": "logs"}}
{"@timestamp": "2024-01-15T12:01:05Z", "level": "WARN", "message": "High memory usage detected", "service": "worker", "host": "worker-01", "duration_ms": 0}
{"index": {"_index": "logs"}}
{"@timestamp": "2024-01-15T12:01:10Z", "level": "ERROR", "message": "Payment service unreachable", "service": "checkout", "host": "web-02", "duration_ms": 5000}
'
# Bulk index from file
cat > logs.ndjson << 'NDJSON'
{"index": {"_index": "logs"}}
{"@timestamp": "2024-01-15T12:02:00Z", "level": "INFO", "message": "Health check passed", "service": "monitor"}
NDJSON
curl -u admin:StrongPassword123! \
-X POST http://localhost:4080/api/_bulk \
-H 'Content-Type: application/json' \
--data-binary @logs.ndjson
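When generating bulk payloads programmatically, the main pitfall is the NDJSON framing: each action line must be followed by its document on its own line, and the payload must end with a newline. The sketch below shows a small hypothetical helper (`to_bulk_ndjson` is not part of any ZincSearch client) that builds such a payload from Python dicts:

```python
import json

def to_bulk_ndjson(index, docs):
    """Build a _bulk payload: one action line per document,
    each followed by the document itself, with a trailing newline."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

payload = to_bulk_ndjson("logs", [
    {"@timestamp": "2024-01-15T12:03:00Z", "level": "INFO", "message": "Cache warmed"},
    {"@timestamp": "2024-01-15T12:03:05Z", "level": "ERROR", "message": "Upstream 502"},
])
# Pipe the result to the bulk endpoint, e.g.:
#   curl -u admin:... -X POST http://localhost:4080/api/_bulk --data-binary @-
print(payload)
```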
Searching
ZincSearch uses an Elasticsearch-compatible query DSL:
# Full-text search
curl -u admin:StrongPassword123! \
-X POST "http://localhost:4080/api/logs/_search" \
-H 'Content-Type: application/json' \
-d '{
"query": {
"match": {
"message": "connection timeout"
}
},
"sort": [{"@timestamp": "desc"}],
"size": 20,
"from": 0
}'
# Boolean query with filters
curl -u admin:StrongPassword123! \
-X POST "http://localhost:4080/api/logs/_search" \
-H 'Content-Type: application/json' \
-d '{
"query": {
"bool": {
"must": [
{"match": {"message": "error"}}
],
"filter": [
{"term": {"level": "ERROR"}},
{"range": {
"@timestamp": {
"gte": "2024-01-15T00:00:00Z",
"lte": "2024-01-15T23:59:59Z"
}
}}
]
}
},
"size": 50,
"sort": [{"@timestamp": {"order": "desc"}}]
}'
# Aggregations (statistics)
curl -u admin:StrongPassword123! \
-X POST "http://localhost:4080/api/logs/_search" \
-H 'Content-Type: application/json' \
-d '{
"query": {"match_all": {}},
"aggs": {
"by_level": {
"terms": {"field": "level", "size": 10}
},
"by_service": {
"terms": {"field": "service", "size": 10}
},
"avg_duration": {
"avg": {"field": "duration_ms"}
}
},
"size": 0
}'
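The aggregation response follows the Elasticsearch convention: each terms aggregation returns a `buckets` array of `{key, doc_count}` objects, and metric aggregations return a `value`. A small sketch of consuming such a response (the response fragment below is a hypothetical sample, shaped like what the _search call above would return):

```python
# Hypothetical response fragment in the Elasticsearch-style shape
# ZincSearch follows; real values come from the _search call above.
response = {
    "aggregations": {
        "by_level": {
            "buckets": [
                {"key": "INFO", "doc_count": 120},
                {"key": "ERROR", "doc_count": 7},
            ]
        },
        "avg_duration": {"value": 812.5},
    }
}

# Flatten the terms buckets into a {level: count} dict for reporting.
counts = {b["key"]: b["doc_count"]
          for b in response["aggregations"]["by_level"]["buckets"]}
print(counts)
print(response["aggregations"]["avg_duration"]["value"])
```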
# Multi-index search
curl -u admin:StrongPassword123! \
-X POST "http://localhost:4080/api/logs,metrics/_search" \
-H 'Content-Type: application/json' \
-d '{"query": {"match_all": {}}, "size": 10}'
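For result sets larger than one page, pagination uses the same from/size mechanism as Elasticsearch, bounded by the ZINC_MAX_RESULTS setting from the environment file above. A sketch of a helper (hypothetical, not a client API) that yields successive request bodies:

```python
def paged_bodies(query, page_size=100, max_results=10000):
    """Yield successive _search request bodies that page through
    results with from/size, staying under the ZINC_MAX_RESULTS cap."""
    offset = 0
    while offset < max_results:
        yield {
            "query": query,
            "from": offset,
            "size": page_size,
            "sort": [{"@timestamp": {"order": "desc"}}],
        }
        offset += page_size

# Three pages of 100 covering the first 300 hits:
pages = list(paged_bodies({"match_all": {}}, page_size=100, max_results=300))
```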
Elasticsearch-Compatible API
ZincSearch implements a subset of the Elasticsearch 7.x API. You can use Elasticsearch clients by pointing them at ZincSearch:
# Python elasticsearch client pointing at ZincSearch
# (use the 7.x client to match the API: pip install "elasticsearch<8")
from elasticsearch import Elasticsearch
es = Elasticsearch(
"http://localhost:4080",
http_auth=("admin", "StrongPassword123!")
)
# Index a document
es.index(index="logs", body={
"@timestamp": "2024-01-15T12:00:00Z",
"level": "INFO",
"message": "Application started"
})
# Search
result = es.search(index="logs", body={
"query": {"match": {"message": "started"}},
"size": 10
})
print(result["hits"]["hits"])
// Node.js elasticsearch client (7.x client to match ZincSearch's API;
// CommonJS has no top-level await, so wrap the calls in an async function)
const { Client } = require("@elastic/elasticsearch");
const client = new Client({
node: "http://localhost:4080",
auth: { username: "admin", password: "StrongPassword123!" }
});
async function main() {
await client.index({
index: "logs",
body: { "@timestamp": new Date(), level: "INFO", message: "Test" }
});
const { body } = await client.search({
index: "logs",
body: { query: { match: { message: "test" } } }
});
console.log(body.hits.hits);
}
main();
Supported Elasticsearch APIs:
- _doc, _bulk, _search, _count, _mapping, _settings
- Index creation and deletion
- Basic query DSL (match, term, range, bool, match_all)
- Basic aggregations (terms, range, sum, min, max, avg)
Not supported: complex aggregations, percolate, ML features.
Grafana Integration
Configure Grafana to use ZincSearch as an Elasticsearch datasource:
- In Grafana, go to Configuration → Data Sources → Add data source
- Select Elasticsearch
- Configure:
- URL: http://localhost:4080
- Access: Server
- Basic Auth: enabled, username admin, password StrongPassword123!
- Index name: logs
- Time field name: @timestamp
- Elasticsearch version: 7.0+
- Click Save & Test
Create a log dashboard:
- Add a Logs panel with the ZincSearch datasource
- Add a Bar gauge panel for log level distribution using a terms aggregation
- Set up a time series for error rate over time
Troubleshooting
ZincSearch won't start:
sudo journalctl -u zincsearch -f
# Check for port conflicts: ss -tlnp | grep 4080
# Verify data directory permissions: ls -la /var/lib/zincsearch
Login fails:
# ZINC_FIRST_ADMIN_USER/PASSWORD are only applied on the first start
# To reset password, use the admin API
curl -u admin:currentpassword \
-X PUT http://localhost:4080/api/user \
-H 'Content-Type: application/json' \
-d '{"_id": "admin", "name": "Admin", "password": "NewPassword123!", "role": "admin"}'
Slow search performance:
# Check index stats
curl -u admin:StrongPassword123! http://localhost:4080/api/index/logs/stats
# For large indexes, ensure fields are marked as indexed in mappings
# Use keyword type for exact-match filters, text for full-text search
Bulk indexing errors:
# Validate the NDJSON format - each action line and its document
# must be on separate lines, and the payload must end with a newline
# The _bulk endpoint returns per-item status
curl ... | python3 -m json.tool | grep '"result"\|"error"'
High memory usage:
# ZincSearch loads hot data into memory
# Reduce ZINC_MAX_RESULTS if memory is constrained
# Limit the time range of queries
Conclusion
ZincSearch delivers an Elasticsearch-compatible search API in a single binary that uses a fraction of the resources of a full Elasticsearch cluster, making it ideal for log analytics and application search on VPS servers where resources are limited. The compatible API means you can use existing Elasticsearch clients, Grafana dashboards, and Fluentd/Fluent Bit outputs without code changes. For production, run it behind Nginx with TLS, and keep the admin credentials secure while creating separate users for application access.
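As a sketch of the Fluent Bit integration mentioned above, the fragment below uses Fluent Bit's es output plugin pointed at ZincSearch's Elasticsearch-compatible endpoints; the Path value and exact option set are assumptions that may vary by ZincSearch and Fluent Bit version, so verify them against the ZincSearch documentation:

```ini
[OUTPUT]
    Name               es
    Match              *
    Host               127.0.0.1
    Port               4080
    Path               /es
    Index              logs
    HTTP_User          admin
    HTTP_Passwd        StrongPassword123!
    Suppress_Type_Name On
    tls                Off
```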