Deploy
Overview
This guide provides step-by-step instructions for setting up an environment that integrates multiple services like AIMETA.
Prerequisites
Necessary API keys and environment variables as required by specific services.
Ensure NVIDIA GPU drivers are installed if utilizing GPU support for the LLM.
Service Configuration
LLM-GPU Services
Purpose: Hosts a Large Language Model with optional GPU support.
Configuration:
Operates within a Linux environment.
Utilizes all available NVIDIA GPUs for the GPU-enabled configuration.
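The GPU-enabled configuration above can be sketched as a Docker Compose fragment. The image name, service name, and network name here are illustrative assumptions, not the project's actual values:

```yaml
# Hypothetical sketch of the llm-gpu service; image and names are assumptions.
services:
  llm-gpu:
    image: ollama/ollama:latest   # assumption: any Linux-based LLM server image
    networks:
      - net
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all            # claim all available NVIDIA GPUs
              capabilities: [gpu]
```

The `deploy.resources.reservations.devices` stanza is the standard Compose way to request NVIDIA GPUs; it requires the NVIDIA Container Toolkit on the host.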
Pull-Model Service
Purpose: Manages the model pulling for the LLM service.
Configuration:
Built from a Linux-based image.
Depends on environment variables such as model URLs and API keys.
Connected to the net network and dependent on llm-gpu.
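A minimal sketch of this service follows; the build context and environment variable names are assumptions made for illustration:

```yaml
# Hypothetical sketch of the pull-model service; variable names are assumptions.
services:
  pull-model:
    build:
      context: ./pull-model            # assumption: custom Linux-based build
    environment:
      - LLM_MODEL_URL=${LLM_MODEL_URL} # assumption: model URL variable
      - API_KEY=${API_KEY}             # assumption: API key variable
    networks:
      - net
    depends_on:
      - llm-gpu
```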
Database Service
Purpose: Utilizes Neo4j for graph-based data storage.
Configuration:
Exposes ports for external connectivity.
Uses persistent storage volumes.
Configured with environment variables for authentication and plugins.
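The database configuration can be sketched as below. The Neo4j version, port mappings, volume name, and environment variable names are assumptions; the official Neo4j image does read `NEO4J_AUTH` and `NEO4J_PLUGINS`:

```yaml
# Hypothetical sketch of the Neo4j database service.
services:
  database:
    image: neo4j:5                    # assumption: Neo4j 5.x
    ports:
      - "7474:7474"                   # HTTP browser interface
      - "7687:7687"                   # Bolt protocol
    volumes:
      - neo4j_data:/data              # persistent storage
    environment:
      - NEO4J_AUTH=${NEO4J_USERNAME}/${NEO4J_PASSWORD}  # assumption: var names
      - NEO4J_PLUGINS=["apoc"]        # assumption: APOC plugin enabled
volumes:
  neo4j_data:
```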
Loader Service
Purpose: Loads data into the Neo4j database.
Configuration:
Custom Dockerfile.
Interacts with Neo4j and depends on the database and pull-model services.
Utilizes API keys and environment configurations.
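The loader's dependencies and configuration might look like the following sketch; the build context, connection URI, and variable names are assumptions:

```yaml
# Hypothetical sketch of the loader service; paths and variables are assumptions.
services:
  loader:
    build:
      context: ./loader                # assumption: custom Dockerfile location
    environment:
      - NEO4J_URI=bolt://database:7687 # assumption: Bolt URI via the service name
      - API_KEY=${API_KEY}             # assumption: API key variable
    networks:
      - net
    depends_on:
      - database
      - pull-model
```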
Bot Services
General Bot Configuration:
Custom Dockerfile.
Utilizes the net network and connects to the Neo4j database.
Voice Bot Specifics:
Handles tasks related to Voice processing.
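A bot service under this general configuration could be sketched as follows; the service name, build context, and connection URI are assumptions:

```yaml
# Hypothetical sketch of a bot service (voice bot shown); names are assumptions.
services:
  voice-bot:
    build:
      context: ./voice-bot             # assumption: custom Dockerfile location
    environment:
      - NEO4J_URI=bolt://database:7687 # assumption: Neo4j connection string
    networks:
      - net
    depends_on:
      - database
```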
API Service
Purpose: Provides an API interface for external communications.
Configuration:
Built from a custom Dockerfile.
Health checks implemented.
Ports exposed for external connectivity.
Starts only after the database is available and the pull-model service has run to completion.
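The API's startup ordering can be expressed with Compose dependency conditions, sketched below; the port, endpoint, and build context are assumptions:

```yaml
# Hypothetical sketch of the API service; port and endpoint are assumptions.
services:
  api:
    build:
      context: ./api                   # assumption: custom Dockerfile location
    ports:
      - "8000:8000"                    # assumption: exposed API port
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/"]  # assumption: endpoint
      interval: 30s
      timeout: 10s
      retries: 5
    networks:
      - net
    depends_on:
      database:
        condition: service_healthy                  # wait for a passing healthcheck
      pull-model:
        condition: service_completed_successfully   # wait for the pull to finish
```

`service_completed_successfully` gates startup on a one-shot container exiting with status 0, which matches a model-pulling step that runs once and finishes.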
Health Checks
Implemented in services such as the API and the Chrome plugin to ensure each service is operational before it is made available for use.
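A generic healthcheck stanza looks like the sketch below; the probe command and endpoint are assumptions, and `start_period` gives the service time to boot before failures count:

```yaml
# Hypothetical healthcheck stanza; the probed endpoint is an assumption.
services:
  some-service:
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:8000/health || exit 1"]
      interval: 15s      # how often to probe
      timeout: 5s        # per-probe timeout
      retries: 3         # consecutive failures before "unhealthy"
      start_period: 30s  # grace period during startup
```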