Troubleshooting¶
Sometimes, an installation or upgrade will not go as planned. This page details some ways to troubleshoot what happened.
As a general troubleshooting rule, output from the following commands will be helpful:
- ./manage_server logs to view stack logs (add --follow to stream)
- journalctl -xu docker.service (or the container engine equivalent) if the engine itself is failing
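For example, to watch for problems as they happen (the grep filter below is only an illustration; adjust or drop it as needed):
# Stream stack logs and highlight likely problem lines (the filter is an assumption, not required)
./manage_server logs --follow | grep -iE 'error|fail|denied'
# Check whether the container engine itself is healthy
journalctl -xu docker.service --since "1 hour ago"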
Reset the Enterprise Server Installation¶
If, for any reason, you want to "reset" your entire Enterprise installation and completely start over, you'll need to do the following:
# Optional: Back up your existing data
./manage_server backup
# Stop the Enterprise server
./manage_server stop
# Remove the Docker volumes
# WARNING: THIS WILL IRREVERSIBLY DELETE ALL OF YOUR DATA
./manage_server delete
# Re-install the Enterprise server
./manage_server install
# Optional: Start the server and restore from backup
./manage_server start --detach
./manage_server restore
Unable to Start or Stop the Enterprise Server¶
There are a few common causes, detailed below.
Running as Root¶
If deploying with the bundled database container, do not run as root. The upstream PostgreSQL container will fail to start, and the backend health check will report database:5432 - no response. Use a non-root user or an external PostgreSQL service.
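A quick sanity check before installing or starting the server (a minimal sketch; any unprivileged user will do):
# Print the current user and numeric UID; root is UID 0
id -un
id -u
# If this prints 0, switch to an unprivileged user before running ./manage_server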
Swarm vs Compose mismatch¶
If you installed with Swarm, config.env contains the stack name. Commands will default to Swarm automatically. Use --swarm[=<STACK_NAME>] only if you need to override the saved stack name. Mismatching the mode can lead to “nothing to stop/start” or orphaned services.
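To see which mode an existing installation is actually using, you can compare the saved configuration with what the engine has deployed. The grep pattern below is an assumption about how the stack name appears in config.env; the docker commands are standard:
# Look for the saved stack name in config.env (pattern is an assumption)
grep -i stack config.env
# List Swarm stacks known to the engine
docker stack ls
# List Compose projects known to the engine
docker compose ls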
Issues With Overrides¶
If you see the error version mismatched between two composefiles, remove the version: line from your override file. This has been deprecated for a while (since 2021), but has only recently (2025) begun causing warnings and errors on newer versions of Docker.
Additionally, if you've made any modifications to the service definition file in the override file, you might want to check the output of a Docker command like docker stack ps binaryninja_enterprise --no-trunc to see if the containers are failing to start for any reason that's been identified by Docker itself.
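As a sketch, assuming your override file is named docker-compose.override.yml (substitute your actual filename) and the default stack name:
# Remove the deprecated top-level version: line from the override file
sed -i '/^version:/d' docker-compose.override.yml
# Ask Docker why containers are failing to start, with untruncated error text
docker stack ps binaryninja_enterprise --no-trunc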
Missing Files or Incorrect File Permissions¶
If you see messages like Error: required secret 'db_password' not found in /run/secrets or /secrets or Warning: failed to copy secret (and/or tracebacks with Permission denied, Read-only file system, or No such file or directory), the most likely cause of this is incorrect file permissions on secrets.
This can happen on any host platform, but most commonly occurs on Red Hat systems using Podman for deployment, because of SELinux labeling. On RHEL, files inside of the container must have the label type container_file_t (or var_run_t, if they're mounted in /run). Files outside of the container, on the host file system, will likely have something very different (user_home_t if in /home, user_tmp_t if in /tmp, and so on). When Podman mounts volumes into the container, it is supposed to re-label these files. Sometimes, this doesn't happen as expected.
The best troubleshooting steps for this are to:
- Ensure the file exists at the expected path on the host
- Ensure the user launching the Enterprise server owns this file and can read it
- Ensure the file exists at the expected path inside the container
- Ensure the root user inside the container can read this file
- If using SELinux, also ensure (using something like ls -Z) that the appropriate labels are set in both places
On RHEL, you can also run something like the following to look for audit logs corresponding to file accesses from the containers that have been blocked by SELinux:
sudo sealert -a /var/log/audit/audit.log | sed -n '1,120p'
Check the docker-compose.yml to see what volumes and secrets are expected in a standard deployment. (There are many.)
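A minimal sketch of those checks, assuming a secret at ./secrets/db_password on the host mounted at /run/secrets/db_password inside a container named enterprise_backend (all three names are assumptions; use the paths and names from your docker-compose.yml and docker ps / podman ps):
# On the host: confirm the file exists, is readable by your user, and check its SELinux label
ls -lZ ./secrets/db_password
# Inside the container: confirm the file exists, is readable by root, and check its label
podman exec enterprise_backend ls -lZ /run/secrets/db_password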
Network Errors When Starting the Enterprise Server¶
If you are seeing a large number of errors in the log when starting the Enterprise server that say things like "Host is unreachable", this generally points to a host networking issue.
If you are on Red Hat, nftables is the likely culprit, as it is known to interfere with docker-compose routes and prevent inter-container networking. The easiest fix is to edit /etc/firewalld/firewalld.conf and change the line FirewallBackend=nftables to FirewallBackend=iptables, which switches your default firewall backend from nftables to iptables. After running sudo systemctl restart firewalld.service, the server should be back to working correctly.
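As a sketch of that change on the command line (back up the file first if you prefer to edit it by hand):
# Switch the firewalld backend from nftables to iptables
sudo sed -i 's/^FirewallBackend=nftables/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
# Restart firewalld so the change takes effect
sudo systemctl restart firewalld.service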
Other causes for this problem are more generally related to firewall rules on the local system and/or a misconfiguration of your Docker networking.
TLS/Certificate Problems¶
If using a custom CA (SSO, proxy, webhook, or registry), ensure the CA is mounted into backend and update-ca-certificates runs on start (see SSO custom CA guidance).
If you have another proxy in front, set ENTERPRISE_PROXY_NO_TLS=true (or --no-tls) and ensure the external proxy presents a cert trusted by clients. Clients will reject untrusted CAs.
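To confirm which certificate chain clients actually see from the external proxy, an openssl check like the following can help (the hostname is a placeholder):
# Show the certificate chain presented to clients (replace the hostname with your server's)
openssl s_client -connect enterprise.example.com:443 -showcerts </dev/null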
Single Sign-On (SSO) Doesn't Work¶
Please see the troubleshooting section listed under your chosen SSO source on this page.
Changing the Volume Location Won't Work¶
If you are trying to change the volume location of an existing install, Docker may keep using the old named volumes. Use docker volume ls / docker volume inspect to identify them and remove them with docker volume rm <volume name> so the new bind mounts or volume names take effect. Back up before removing volumes; deleting a volume permanently deletes its data.
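A sketch of that cleanup, using a hypothetical volume name (substitute the names reported by docker volume ls):
# Identify the stale volume(s) Docker keeps reusing
docker volume ls
docker volume inspect binaryninja_enterprise_data
# WARNING: this permanently deletes the volume's data; back up first
docker volume rm binaryninja_enterprise_data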
Cannot Download Client Executables¶
If, when you download the client executables, you get a 200 response and a 0 byte file, your underlying storage likely has a sector size larger than 4096 bytes. A way to confirm this is to look at the proxy container's logs for an error message stating failed (22: Invalid argument) while sending response to client.
To fix this, you will either need to migrate your server to a storage volume with a smaller sector size, or contact support and we'll help troubleshoot further (this will require changes to the proxy container).
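To check the sector size of the underlying device and confirm the matching proxy error, something like the following can help (the device and container names are assumptions; substitute your own):
# Physical sector size of the block device backing your Docker storage
sudo blockdev --getpbsz /dev/sda
# Look for the matching error in the proxy container's logs
docker logs enterprise_proxy 2>&1 | grep 'Invalid argument'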
Client SSO and Chat Not Working¶
If the Enterprise Server is behind a proxy, ensure that websocket traffic is being forwarded. To do this in Nginx, for example, set the following configuration variables:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
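A minimal sketch of where those directives sit in an Nginx reverse proxy configuration; the server name, certificate paths, and upstream address are all hypothetical:
# Hypothetical reverse-proxy block; adjust server_name, certificates, and proxy_pass to your deployment
server {
    listen 443 ssl;
    server_name enterprise.example.com;

    ssl_certificate     /etc/nginx/certs/enterprise.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/certs/enterprise.key;  # placeholder path

    location / {
        proxy_pass https://127.0.0.1:8443;  # placeholder upstream
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}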
Database Is Locked¶
Binary Ninja currently uses SQLite for its analysis databases (.bndb), projects (.bnpr), and type archives (.bnta). This means only a single instance can have one of these files open at any time.
The message Error while saving database snapshot: database is locked means the file is open in multiple places at once. All instances will need to be closed, and a new one opened, to allow saving again (this includes syncing to an Enterprise server).
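If you're not sure which process still has the database open, a tool like lsof can identify it (the path below is a placeholder):
# List processes that still have the file open (placeholder path)
lsof /path/to/analysis.bndb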
Backups and Restores¶
- Restores from newer backups to older servers are blocked. Update the server first, then restore.
- Backup format version 0 (v1.0.43 or earlier) is not supported in v2.0+. Contact support if you must recover an old backup.
- Always restart the stack after a restore to ensure migrations finish cleanly (see the sketch below).
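Putting those rules together, a minimal restore sequence looks roughly like this, using the same manage_server commands shown earlier on this page:
# Start the (already-updated) server, restore the backup, then restart the stack
./manage_server start --detach
./manage_server restore
./manage_server stop
./manage_server start --detach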
Broken Pipe During Restore¶
If the bundled object store (SeaweedFS) runs out of writable volumes, uploads and restores can fail even though the host still has free disk space. Common symptoms include:
- Restore failures that mention UploadPart or InternalError from the object store
- object-store logs that include No writable volumes or failed to find writable volumes
- Large restores aborting with Broken pipe after the object store returns 500 errors
The most common underlying cause is network latency or instability. This is especially likely if the backup and the service are on different remote hosts. Try moving the backup closer to the service, if possible, to minimize potential network failures.
The second most common underlying cause is the object store running out of space due to the disk filling up or its limits being reached. Fixing the former will require upgrading storage or removing data. Fixing the latter requires stopping the server, increasing the value of ENTERPRISE_OBJECT_STORE_MAX_VOLUMES, and starting it again:
./manage_server stop
# Update config.env with larger values than the following defaults:
# ENTERPRISE_OBJECT_STORE_MAX_VOLUMES=32
./manage_server start
ENTERPRISE_OBJECT_STORE_VOLUME_SIZE controls the per-volume size limit (in MB, maximum of 30000) and ENTERPRISE_OBJECT_STORE_MAX_VOLUMES controls how many volumes the bundled SeaweedFS server can allocate. These settings only affect the bundled object store. If you use an external S3-compatible service, consult that provider's storage limits instead.
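To confirm whether the bundled object store is actually the bottleneck before changing these values, check the host's free space and look for the log lines listed above (the grep pattern simply mirrors those messages):
# Check free space on the host volume backing the object store
df -h
# Look for the "writable volumes" errors in the stack logs
./manage_server logs | grep -i 'writable volumes'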