Local Installation

Use the instructions below if you want to deploy a Handl solution on-premises, inside your own closed IT loop.
Hardware Requirements
Configuration scenarios:

1. Minimum - no more than 1 document per minute
   - Processor: 6 cores, 4.1 GHz, AVX2 extension; benchmark: Intel Core i5-10600KF
   - RAM: 32 GB
   - Storage: 500 GB SSD
2. Standard - up to 60 documents per minute
   - Graphics card: Nvidia Tesla T4
   - Processor: 8 cores, 3.9 GHz, AVX2 extension; benchmark: Intel Xeon W-2245
   - RAM: 64 GB
   - Storage: 1 TB NVMe
3. Corporate - up to 600 documents per minute
   - Graphics card: Nvidia Tesla T4 x2
   - Processor: 8 cores, 3.9 GHz, AVX2 extension; benchmark: Intel Xeon W-2245 x2
   - RAM: 128 GB
   - Storage: 1 TB NVMe x2
The above are the hardware requirements for production operation. Handl also runs on weaker configurations, such as a laptop with a Core i5-8250U 1.6 GHz CPU, 8 GB RAM, and a 250 GB SSD; however, performance on such configurations is not guaranteed.
Environment requirements:
- Ubuntu operating system, version 18.04
- Docker container management system
- docker-compose
- nvidia-docker
- Latest available Nvidia drivers
- CUDA version 11.1 or later
- Internet access for license confirmation:
  - URL: https://license.ml.handl.ai/check/v2
  - Dynamic IP
  - Port: 443
  - Protocol: TCP
  - Request: POST
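Before installing, you can verify the GPU stack and license connectivity from the command line. This is a minimal sketch using standard Nvidia and curl tooling; the CUDA image tag is an assumption, and since the license endpoint's response body is not documented here, the last check only confirms reachability:

```shell
# Check that the Nvidia driver is installed (driver and CUDA versions appear in the header)
nvidia-smi

# Check that nvidia-docker can expose the GPU inside a container
# (the nvidia/cuda:11.1.1-base-ubuntu18.04 tag is an example, not prescribed by Handl)
docker run --rm --gpus all nvidia/cuda:11.1.1-base-ubuntu18.04 nvidia-smi

# Confirm the license endpoint answers over HTTPS (prints the HTTP status code)
curl -s -o /dev/null -w "%{http_code}\n" -X POST https://license.ml.handl.ai/check/v2
```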
Configuration files
I. docker-compose.yml
Create a docker-compose.yml file depending on the specification of your server.
Copy the following settings:
version: "3"
services:
  queue:
    image: rabbitmq:3.7-management-alpine
    restart: always
    logging: &short_logging
      driver: "json-file"
      options:
        max-file: "10"
        max-size: "100m"
  redis:
    image: redis:6-alpine
    restart: always
    logging: *short_logging
  mongo:
    image: mongo:3.6-stretch
    restart: always
    logging:
      driver: none
  worker: &service
    image: registry.handl.ai/public/worker:v3.6.12
    restart: always
    env_file: &env .env
    command: ""
    depends_on:
      - queue
      - mongo
      - redis
    logging: *short_logging
    volumes:
      - ./errlogs:/logs:rw
  front:
    image: registry.handl.ai/public/docr-demo-nginx:v3.6.12
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/nginx.conf
    env_file: *env
    ports:
      - ${API_PORT:-8080}:80
    depends_on:
      - api
  api:
    <<: *service
    image: registry.handl.ai/public/api:v3.6.12
  classifier:
    <<: *service
    image: registry.handl.ai/public/classifier:v3.6.12
  multidocnet:
    <<: *service
    image: registry.handl.ai/public/multidocnet:v3.6.12
  heuristics:
    <<: *service
    image: registry.handl.ai/public/heuristics:v3.6.12
  wordnet:
    <<: *service
    image: registry.handl.ai/public/wordnet:v3.6.12
  ocr:
    <<: *service
    image: registry.handl.ai/public/ocr:v3.6.12
  fieldnet:
    <<: *service
    image: registry.handl.ai/public/fieldnet:v3.6.12
  checkbox_segm:
    <<: *service
    image: registry.handl.ai/public/checkbox-segm:v3.6.12
  table_handler:
    <<: *service
    image: registry.handl.ai/public/table-handler:v3.6.12
  face:
    <<: *service
    image: registry.handl.ai/public/face:v3.6.12
II. .env and nginx.conf
Create the .env file:
Copy the following settings:
VERSION=v3.6.10
# front
TRY_ENDPOINT=/try
DEFAULT_NORMALIZATION_FIAS=False
# Legacy - does not work, but is required for startup
DADATA_TOKEN=
DADATA_SECRET=
AUTOCODE_REPORT=
AUTOCODE_TOKEN=
Download the nginx.conf file.
Put the three files docker-compose.yml, .env, and nginx.conf in the same directory.
Running Handl
Navigate to the directory containing the three configuration files. Type the following command at the command line:
docker-compose up -d --force-recreate
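Once the stack is up, you can check that all containers started correctly. The commands below are standard docker-compose operations, nothing Handl-specific:

```shell
# List the services; each container should report an "Up" state
docker-compose ps

# Follow the recent logs of one service, e.g. the api container
docker-compose logs --tail=100 -f api

# Stop and remove the containers when finished
docker-compose down
```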
You are all set! You can now use the local version of the web demo by opening http://localhost:8080 in your browser. You can also access the server through the API.
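As a quick smoke test that the front container is serving on the published port (8080 by default, per `API_PORT` in docker-compose.yml), you can request the demo page headers; the specific API routes are not covered on this page:

```shell
# An HTTP 200 (or a redirect) indicates nginx is up and serving the demo
curl -I http://localhost:8080
```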