Developer Documentation
Complete guide for developers contributing to, building, and extending Cortex Linux.
- System Architecture Overview
- API Reference
- Contributing Guidelines
- Building from Source
- Package Building
- Kernel Modifications
- Security Model
- Testing
- Debugging
Cortex Linux is built on a layered architecture that integrates AI capabilities directly into the operating system.
┌─────────────────────────────────────┐
│ Application Layer │
│ (User applications, CLI tools) │
└─────────────────────────────────────┘
│
┌─────────────────────────────────────┐
│ Service Layer │
│ (systemd services, HTTP API) │
└─────────────────────────────────────┘
│
┌─────────────────────────────────────┐
│ AI Layer │
│ (Sapiens 0.27B engine) │
└─────────────────────────────────────┘
│
┌─────────────────────────────────────┐
│ Kernel Layer │
│ (Linux kernel + AI enhancements) │
└─────────────────────────────────────┘
- Base: Linux kernel 6.1+
- Enhancements:
- AI-aware process scheduling
- Resource management for AI workloads
- Enhanced memory management
- Real-time capabilities
Location: /usr/src/linux-cortex/
- Engine: Sapiens 0.27B reasoning model
- Runtime: Custom inference engine (C++)
- Memory Management: Efficient model loading and caching
- API: C API for system integration
Location: /usr/lib/cortex-ai/
- HTTP API Server: RESTful API on port 8080
- CLI Tool: cortex-ai command-line interface
- Systemd Services: Background AI services
- Configuration: YAML-based configuration
Location: /usr/bin/cortex-ai, /etc/cortex-ai/
- Standard Linux Userland: Core utilities, package manager
- Development Tools: Compilers, debuggers, build tools
- Package Management: APT-based (Debian/Ubuntu compatible)
The core AI engine exposes a C API for system-level integration.
#ifndef CORTEX_AI_H
#define CORTEX_AI_H
#include <stddef.h>
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
#endif
// Error codes
typedef enum {
CORTEX_SUCCESS = 0,
CORTEX_ERROR_INVALID_PARAM = -1,
CORTEX_ERROR_MODEL_NOT_LOADED = -2,
CORTEX_ERROR_OUT_OF_MEMORY = -3,
CORTEX_ERROR_TIMEOUT = -4
} cortex_error_t;
// Handle for AI engine instance
typedef void* cortex_handle_t;
// Initialize AI engine
cortex_error_t cortex_init(cortex_handle_t* handle, const char* model_path);
// Cleanup
void cortex_cleanup(cortex_handle_t handle);
// Perform reasoning
cortex_error_t cortex_reason(
cortex_handle_t handle,
const char* query,
char* output,
size_t output_size,
uint32_t* tokens_used
);
// Get model information
cortex_error_t cortex_get_model_info(
cortex_handle_t handle,
char* name,
size_t name_size,
uint32_t* param_count
);
#ifdef __cplusplus
}
#endif
#endif // CORTEX_AI_H
Example usage:
#include <cortex_ai.h>
#include <stdio.h>
#include <stdlib.h>
int main() {
cortex_handle_t handle;
cortex_error_t err;
char output[1024];
uint32_t tokens;
// Initialize
err = cortex_init(&handle, "/usr/lib/cortex-ai/models/sapiens-0.27b");
if (err != CORTEX_SUCCESS) {
fprintf(stderr, "Failed to initialize: %d\n", err);
return 1;
}
// Perform reasoning
err = cortex_reason(handle, "What is 2+2?", output, sizeof(output), &tokens);
if (err == CORTEX_SUCCESS) {
printf("Result: %s\n", output);
printf("Tokens used: %u\n", tokens);
}
// Cleanup
cortex_cleanup(handle);
return 0;
}
See AI Integration Guide - Python Integration for Python API documentation.
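For friendlier diagnostics, the `cortex_error_t` codes can be mapped to messages. A minimal Python sketch that mirrors the enum above (the message strings are illustrative, not defined by the C API):

```python
from enum import IntEnum

class CortexError(IntEnum):
    """Mirrors the cortex_error_t enum from cortex_ai.h."""
    SUCCESS = 0
    INVALID_PARAM = -1
    MODEL_NOT_LOADED = -2
    OUT_OF_MEMORY = -3
    TIMEOUT = -4

# Human-readable messages (illustrative wording, not part of the C API)
ERROR_MESSAGES = {
    CortexError.SUCCESS: "operation completed",
    CortexError.INVALID_PARAM: "invalid parameter",
    CortexError.MODEL_NOT_LOADED: "model not loaded",
    CortexError.OUT_OF_MEMORY: "out of memory",
    CortexError.TIMEOUT: "inference timed out",
}

def describe(code: int) -> str:
    """Return a message for a cortex_error_t value, with a fallback."""
    try:
        return ERROR_MESSAGES[CortexError(code)]
    except ValueError:
        return f"unknown error ({code})"
```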
See AI Integration Guide - HTTP API for HTTP API documentation.
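As a quick illustration of talking to the RESTful API on port 8080, the sketch below builds and posts a JSON payload. The `/v1/reason` path and the field names are assumptions for illustration only; see the AI Integration Guide for the real schema.

```python
import json
import urllib.request

API_URL = "http://localhost:8080/v1/reason"  # endpoint path is an assumption

def build_reason_request(query: str, max_tokens: int = 256) -> dict:
    """Build a request payload; field names are illustrative, not the real schema."""
    if not query:
        raise ValueError("Query cannot be empty")
    return {"query": query, "max_tokens": max_tokens}

def reason(query: str) -> dict:
    """POST the payload to the local API server and return the parsed response."""
    data = json.dumps(build_reason_request(query)).encode()
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```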
- Be respectful and inclusive
- Welcome newcomers and help them learn
- Focus on constructive feedback
- Respect different viewpoints
- Fork the Repository
  git clone https://github.com/cortexlinux/cortex.git
  cd cortex
- Create a Branch
  git checkout -b feature/your-feature-name
  # or: git checkout -b fix/your-bug-fix
- Make Changes
  - Follow coding standards (see below)
  - Write tests for new features
  - Update documentation
- Test Your Changes
  make test
  # or: pytest tests/
- Commit Changes
  git add .
  git commit -m "feat: add new feature description"
  Commit message format:
  - feat: New feature
  - fix: Bug fix
  - docs: Documentation changes
  - test: Test additions/changes
  - refactor: Code refactoring
  - perf: Performance improvements
- Push and Create Pull Request
  git push origin feature/your-feature-name
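The commit-message format above can be enforced by a hook. A small sketch of such a check (not part of the repository):

```python
import re

# Prefixes taken from the contributing guidelines above
COMMIT_RE = re.compile(r"^(feat|fix|docs|test|refactor|perf): \S.*")

def is_valid_commit_message(message: str) -> bool:
    """Check the first line of a commit message against the feat:/fix:/... format."""
    first_line = message.splitlines()[0] if message else ""
    return bool(COMMIT_RE.match(first_line))
```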
// Use 4 spaces for indentation
// Maximum line length: 100 characters
// Function names: snake_case
// Constants: UPPER_SNAKE_CASE
// Example
int cortex_process_query(const char* query, cortex_result_t* result) {
if (query == NULL || result == NULL) {
return CORTEX_ERROR_INVALID_PARAM;
}
// Implementation
return CORTEX_SUCCESS;
}
# Follow PEP 8
# Use 4 spaces for indentation
# Maximum line length: 100 characters
# Function names: snake_case
# Class names: PascalCase
# Example
from typing import Dict, Optional

def process_query(query: str, context: Optional[Dict] = None) -> CortexResult:
"""Process a query through the AI engine.
Args:
query: The query string to process
context: Optional context dictionary
Returns:
CortexResult object with the response
"""
if not query:
raise ValueError("Query cannot be empty")
# Implementation
return result
- All public APIs must have documentation
- Use docstrings for Python functions
- Use Doxygen-style comments for C/C++
- Include examples in documentation
- Unit Tests: Required for all new functions
- Integration Tests: Required for new features
- Coverage: Aim for 80%+ code coverage
- CI/CD: All tests must pass before merge
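A unit test meeting the requirements above might look like the following sketch; `tokenize_query` is a hypothetical stand-in, not a real SDK symbol:

```python
# tests/test_example.py -- illustrative only; tokenize_query is hypothetical
def tokenize_query(query: str) -> list:
    """Toy stand-in for an SDK helper: lowercase, whitespace-split."""
    return query.lower().split()

def test_tokenize_query_splits_on_whitespace():
    assert tokenize_query("What is 2+2?") == ["what", "is", "2+2?"]

def test_tokenize_query_empty_string():
    assert tokenize_query("") == []
```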
# Run tests
make test
# Run with coverage
make test-coverage
# Run specific test suite
pytest tests/test_ai_engine.py
# Install build dependencies
sudo apt update
sudo apt install -y \
build-essential \
cmake \
ninja-build \
git \
python3 \
python3-pip \
libssl-dev \
libcurl4-openssl-dev \
pkg-config
git clone https://github.com/cortexlinux/cortex.git
cd cortex
git submodule update --init --recursive
cd cortex-ai-engine
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
sudo make install
cd cortex-cli
cargo build --release
sudo cp target/release/cortex-ai /usr/local/bin/
cd cortex-api-server
go build -o cortex-api-server
sudo cp cortex-api-server /usr/local/bin/
# From repository root
./build.sh --all
# Or build specific components
./build.sh --engine
./build.sh --cli
./build.sh --api
# Python dependencies
pip install -r requirements-dev.txt
# Rust toolchain (for CLI)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Go toolchain (for API server)
# Download from https://go.dev/dl/
# Install packaging tools
sudo apt install -y debhelper dh-make devscripts
# Create package structure
cd cortex
debuild -us -uc
# Result: ../cortex-ai_1.0.0_amd64.deb
# Install packaging tools
sudo yum install -y rpm-build rpmdevtools
# Setup build environment
rpmdev-setuptree
# Create spec file (see packaging/cortex-ai.spec)
rpmbuild -ba packaging/cortex-ai.spec
# Result: ~/rpmbuild/RPMS/x86_64/cortex-ai-1.0.0-1.x86_64.rpm
# Build all packages
./scripts/build-packages.sh
# Packages created in dist/ directory:
# - cortex-ai_1.0.0_amd64.deb
# - cortex-ai-1.0.0-1.x86_64.rpm
# - cortex-ai-1.0.0.tar.gz
# Kernel source
/usr/src/linux-cortex/
# Kernel configuration
/usr/src/linux-cortex/.config
cd /usr/src/linux-cortex
# Configure kernel
make menuconfig
# or
make xconfig
# Build kernel
make -j$(nproc)
# Build modules
make modules
# Install
sudo make modules_install
sudo make install
Cortex-specific kernel modules:
- cortex_scheduler.ko: AI-aware process scheduling
- cortex_memory.ko: Enhanced memory management for AI workloads
- cortex_monitor.ko: System monitoring and metrics
// cortex_scheduler.c
#include <linux/module.h>
#include <linux/kernel.h>
static int __init cortex_scheduler_init(void) {
printk(KERN_INFO "Cortex scheduler module loaded\n");
// Initialization code
return 0;
}
static void __exit cortex_scheduler_exit(void) {
printk(KERN_INFO "Cortex scheduler module unloaded\n");
// Cleanup code
}
module_init(cortex_scheduler_init);
module_exit(cortex_scheduler_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Cortex Linux Team");
MODULE_DESCRIPTION("AI-aware process scheduler");
# Makefile
obj-m += cortex_scheduler.o
KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)
all:
$(MAKE) -C $(KDIR) M=$(PWD) modules
clean:
$(MAKE) -C $(KDIR) M=$(PWD) clean
# Build module
make
# Load module
sudo insmod cortex_scheduler.ko
# Check module
lsmod | grep cortex
# Unload module
sudo rmmod cortex_scheduler
- AI engine runs in an isolated process with limited privileges
- Systemd service runs as a dedicated user: cortex-ai
- No root access required for AI operations
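The isolation described above is typically enforced by the service unit. A sketch of the kind of hardening directives involved (the unit actually shipped with Cortex Linux may differ):

```ini
# /etc/systemd/system/cortex-ai.service (illustrative excerpt)
[Service]
User=cortex-ai
Group=cortex-ai
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/log/cortex-ai /var/cache/cortex-ai
```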
# AI engine files
/usr/lib/cortex-ai/ # Read-only for users
/etc/cortex-ai/ # Config: root-owned, 644
/var/log/cortex-ai/ # Logs: cortex-ai user, 640
/var/cache/cortex-ai/ # Cache: cortex-ai user, 750
- HTTP API binds to localhost by default
- Optional API key authentication
- Rate limiting to prevent abuse
- CORS configuration for web applications
All packages are signed with GPG:
# Verify package signature
dpkg-sig --verify cortex-ai_1.0.0_amd64.deb
# Import signing key
gpg --import cortex-linux-signing-key.asc
- Never run AI engine as root
- Use API keys in production
- Restrict network access to API
- Regular security updates
- Monitor logs for suspicious activity
# Check security status
cortex-ai security-check
# Review security configuration
cat /etc/cortex-ai/security.yaml
# Run all unit tests
make test
# Run specific test suite
cd cortex-ai-engine
./tests/run_tests.sh
# Run with verbose output
make test VERBOSE=1
# Test HTTP API
cd cortex-api-server
go test ./...
# Test CLI
cd cortex-cli
cargo test
# End-to-end tests
./tests/integration/test_e2e.sh
# Benchmark AI engine
./benchmarks/benchmark_engine.sh
# Load testing
./tests/load/load_test.sh --requests 1000 --concurrent 10
# Generate coverage report
make test-coverage
# View report
open coverage/index.html
# Enable debug logging
export CORTEX_LOG_LEVEL=DEBUG
cortex-ai reason "Test query"
# Use gdb
gdb --args cortex-ai reason "Test query"
(gdb) break cortex_reason
(gdb) run
# Enable debug mode
cortex-api-server --debug --log-level debug
# Use Delve (Go debugger)
dlv debug ./cortex-api-server
(dlv) break main.main
(dlv) continue
# Enable kernel debugging
echo 8 > /proc/sys/kernel/printk
# View kernel messages
dmesg | tail -100
# Use kgdb for remote debugging
# See kernel documentation
# Profile AI engine
perf record -g cortex-ai reason "Test query"
perf report
# Memory profiling (Valgrind)
valgrind --leak-check=full cortex-ai reason "Test query"
VS Code:
{
"C_Cpp.default.includePath": [
"/usr/include/cortex-ai"
],
"python.linting.enabled": true,
"python.linting.pylintEnabled": true
}
CLion: Import CMake project from cortex-ai-engine/
# Format C code (clang-format)
clang-format -i src/**/*.c src/**/*.h
# Format Python code (black)
black cortex-python-sdk/
# Format Rust code
cargo fmt
# Lint C code
cppcheck src/
# Lint Python code
pylint cortex-python-sdk/
# Lint Rust code
cargo clippy
Follow Semantic Versioning:
- MAJOR: Incompatible API changes
- MINOR: New features (backward compatible)
- PATCH: Bug fixes (backward compatible)
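Under this scheme, versions order by comparing the three components numerically, not as strings. A small sketch of that comparison (not a shipped utility):

```python
def parse_semver(version: str) -> tuple:
    """Split 'MAJOR.MINOR.PATCH' into an orderable tuple of ints."""
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))

def is_compatible_upgrade(current: str, candidate: str) -> bool:
    """True if candidate is newer than current without a MAJOR bump."""
    cur, cand = parse_semver(current), parse_semver(candidate)
    return cand > cur and cand[0] == cur[0]
```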
- Update version numbers
- Update CHANGELOG.md
- Run full test suite
- Build packages
- Sign packages
- Create GitHub release
- Publish packages to repository
# Release script
./scripts/release.sh 1.0.1
- Documentation: This wiki
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Chat: Discord
Last updated: 2024