Developer Documentation

Complete guide for developers contributing to, building, and extending Cortex Linux.


Table of Contents

  1. System Architecture Overview
  2. API Reference
  3. Contributing Guidelines
  4. Building from Source
  5. Package Building
  6. Kernel Modifications
  7. Security Model
  8. Testing
  9. Debugging
  10. Development Tools
  11. Release Process

System Architecture Overview

Cortex Linux is built on a layered architecture that integrates AI capabilities directly into the operating system.

Architecture Layers

┌─────────────────────────────────────┐
│          Application Layer          │
│   (User applications, CLI tools)    │
└─────────────────────────────────────┘
                   │
┌─────────────────────────────────────┐
│            Service Layer            │
│    (systemd services, HTTP API)     │
└─────────────────────────────────────┘
                   │
┌─────────────────────────────────────┐
│              AI Layer               │
│       (Sapiens 0.27B engine)        │
└─────────────────────────────────────┘
                   │
┌─────────────────────────────────────┐
│            Kernel Layer             │
│  (Linux kernel + AI enhancements)   │
└─────────────────────────────────────┘

Component Overview

Kernel Layer

  • Base: Linux kernel 6.1+
  • Enhancements:
    • AI-aware process scheduling
    • Resource management for AI workloads
    • Enhanced memory management
    • Real-time capabilities

Location: /usr/src/linux-cortex/

AI Layer

  • Engine: Sapiens 0.27B reasoning model
  • Runtime: Custom inference engine (C++)
  • Memory Management: Efficient model loading and caching
  • API: C API for system integration

Location: /usr/lib/cortex-ai/

Service Layer

  • HTTP API Server: RESTful API on port 8080
  • CLI Tool: cortex-ai command-line interface
  • Systemd Services: Background AI services
  • Configuration: YAML-based configuration

Location: /usr/bin/cortex-ai, /etc/cortex-ai/

Application Layer

  • Standard Linux Userland: Core utilities, package manager
  • Development Tools: Compilers, debuggers, build tools
  • Package Management: APT-based (Debian/Ubuntu compatible)
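
Since the distribution is APT-based, standard apt tooling applies to Cortex packages. For example (the package name cortex-ai matches the Debian package produced in the Package Building section below):

# Inspect the installed AI package
apt show cortex-ai

# List the files it installs
dpkg -L cortex-ai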

API Reference

C API (AI Engine)

The core AI engine exposes a C API for system-level integration.

Header: cortex_ai.h

#ifndef CORTEX_AI_H
#define CORTEX_AI_H

#include <stddef.h>
#include <stdint.h>

#ifdef __cplusplus
extern "C" {
#endif

// Error codes
typedef enum {
    CORTEX_SUCCESS = 0,
    CORTEX_ERROR_INVALID_PARAM = -1,
    CORTEX_ERROR_MODEL_NOT_LOADED = -2,
    CORTEX_ERROR_OUT_OF_MEMORY = -3,
    CORTEX_ERROR_TIMEOUT = -4
} cortex_error_t;

// Handle for AI engine instance
typedef void* cortex_handle_t;

// Initialize AI engine
cortex_error_t cortex_init(cortex_handle_t* handle, const char* model_path);

// Cleanup
void cortex_cleanup(cortex_handle_t handle);

// Perform reasoning
cortex_error_t cortex_reason(
    cortex_handle_t handle,
    const char* query,
    char* output,
    size_t output_size,
    uint32_t* tokens_used
);

// Get model information
cortex_error_t cortex_get_model_info(
    cortex_handle_t handle,
    char* name,
    size_t name_size,
    uint32_t* param_count
);

#ifdef __cplusplus
}
#endif

#endif // CORTEX_AI_H

Usage Example

#include <cortex_ai.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    cortex_handle_t handle;
    cortex_error_t err;
    char output[1024];
    uint32_t tokens;
    
    // Initialize
    err = cortex_init(&handle, "/usr/lib/cortex-ai/models/sapiens-0.27b");
    if (err != CORTEX_SUCCESS) {
        fprintf(stderr, "Failed to initialize: %d\n", err);
        return 1;
    }
    
    // Perform reasoning
    err = cortex_reason(handle, "What is 2+2?", output, sizeof(output), &tokens);
    if (err == CORTEX_SUCCESS) {
        printf("Result: %s\n", output);
        printf("Tokens used: %u\n", tokens);
    }
    
    // Cleanup
    cortex_cleanup(handle);
    return 0;
}
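
To compile and run the example, link against the engine library. A minimal sketch, assuming the library is installed as libcortex-ai with its header on the default include path:

gcc -Wall -o cortex_example example.c -lcortex-ai
./cortex_example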

Python API

See AI Integration Guide - Python Integration for Python API documentation.

HTTP API

See AI Integration Guide - HTTP API for HTTP API documentation.
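
For a quick smoke test from the shell, a request along these lines should work once the service is running (the endpoint path and payload shape are assumptions here; the AI Integration Guide is authoritative):

# Query the local API server (default port 8080)
curl -s -X POST http://127.0.0.1:8080/api/v1/reason \
    -H "Content-Type: application/json" \
    -d '{"query": "What is 2+2?"}'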


Contributing Guidelines

Code of Conduct

  • Be respectful and inclusive
  • Welcome newcomers and help them learn
  • Focus on constructive feedback
  • Respect different viewpoints

Development Workflow

  1. Fork and Clone the Repository

    # After forking on GitHub, clone your fork
    git clone https://github.com/<your-username>/cortex.git
    cd cortex
  2. Create a Branch

    git checkout -b feature/your-feature-name
    # or
    git checkout -b fix/your-bug-fix
  3. Make Changes

    • Follow coding standards (see below)
    • Write tests for new features
    • Update documentation
  4. Test Your Changes

    make test
    # or
    pytest tests/
  5. Commit Changes

    git add .
    git commit -m "feat: add new feature description"

    Commit message format:

    • feat: New feature
    • fix: Bug fix
    • docs: Documentation changes
    • test: Test additions/changes
    • refactor: Code refactoring
    • perf: Performance improvements
  6. Push and Create Pull Request

    git push origin feature/your-feature-name

Coding Standards

C/C++ Code

// Use 4 spaces for indentation
// Maximum line length: 100 characters
// Function names: snake_case
// Constants: UPPER_SNAKE_CASE

// Example
int cortex_process_query(const char* query, cortex_result_t* result) {
    if (query == NULL || result == NULL) {
        return CORTEX_ERROR_INVALID_PARAM;
    }
    
    // Implementation
    return CORTEX_SUCCESS;
}

Python Code

# Follow PEP 8
# Use 4 spaces for indentation
# Maximum line length: 100 characters
# Function names: snake_case
# Class names: PascalCase

from typing import Dict, Optional

# Example
def process_query(query: str, context: Optional[Dict] = None) -> CortexResult:
    """Process a query through the AI engine.
    
    Args:
        query: The query string to process
        context: Optional context dictionary
        
    Returns:
        CortexResult object with the response
    """
    if not query:
        raise ValueError("Query cannot be empty")
    
    # Implementation
    return result

Documentation

  • All public APIs must have documentation
  • Use docstrings for Python functions
  • Use Doxygen-style comments for C/C++
  • Include examples in documentation

Testing Requirements

  • Unit Tests: Required for all new functions
  • Integration Tests: Required for new features
  • Coverage: Aim for 80%+ code coverage
  • CI/CD: All tests must pass before merge

# Run tests
make test

# Run with coverage
make test-coverage

# Run specific test suite
pytest tests/test_ai_engine.py

Building from Source

Prerequisites

# Install build dependencies
sudo apt update
sudo apt install -y \
    build-essential \
    cmake \
    ninja-build \
    git \
    python3 \
    python3-pip \
    libssl-dev \
    libcurl4-openssl-dev \
    pkg-config

Clone Repository

git clone https://github.com/cortexlinux/cortex.git
cd cortex
git submodule update --init --recursive

Build AI Engine

cd cortex-ai-engine
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
sudo make install

Build CLI Tool

cd cortex-cli
cargo build --release
sudo cp target/release/cortex-ai /usr/local/bin/

Build HTTP API Server

cd cortex-api-server
go build -o cortex-api-server
sudo cp cortex-api-server /usr/local/bin/

Build Complete System

# From repository root
./build.sh --all

# Or build specific components
./build.sh --engine
./build.sh --cli
./build.sh --api

Install Development Dependencies

# Python dependencies
pip install -r requirements-dev.txt

# Rust toolchain (for CLI)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Go toolchain (for API server)
# Download from https://go.dev/dl/

Package Building

Debian Package

# Install packaging tools
sudo apt install -y debhelper dh-make devscripts

# Build the binary package (unsigned)
cd cortex
debuild -us -uc

# Result: ../cortex-ai_1.0.0_amd64.deb

RPM Package

# Install packaging tools
sudo yum install -y rpm-build rpmdevtools

# Setup build environment
rpmdev-setuptree

# Build from the spec file (see packaging/cortex-ai.spec)
rpmbuild -ba packaging/cortex-ai.spec

# Result: ~/rpmbuild/RPMS/x86_64/cortex-ai-1.0.0-1.x86_64.rpm

Creating Distribution Packages

# Build all packages
./scripts/build-packages.sh

# Packages created in dist/ directory:
# - cortex-ai_1.0.0_amd64.deb
# - cortex-ai-1.0.0-1.x86_64.rpm
# - cortex-ai-1.0.0.tar.gz

Kernel Modifications

Kernel Source Location

# Kernel source
/usr/src/linux-cortex/

# Kernel configuration
/usr/src/linux-cortex/.config

Building Custom Kernel

cd /usr/src/linux-cortex

# Configure kernel
make menuconfig
# or
make xconfig

# Build kernel
make -j$(nproc)

# Build modules
make modules

# Install
sudo make modules_install
sudo make install

Kernel Modules

Cortex-specific kernel modules:

  • cortex_scheduler.ko: AI-aware process scheduling
  • cortex_memory.ko: Enhanced memory management for AI workloads
  • cortex_monitor.ko: System monitoring and metrics
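
When these modules ship with an installation, their metadata can be inspected with standard tooling before loading:

# Show module metadata (license, description, parameters)
modinfo cortex_scheduler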

Building a Module

// cortex_scheduler.c
#include <linux/module.h>
#include <linux/kernel.h>

static int __init cortex_scheduler_init(void) {
    printk(KERN_INFO "Cortex scheduler module loaded\n");
    // Initialization code
    return 0;
}

static void __exit cortex_scheduler_exit(void) {
    printk(KERN_INFO "Cortex scheduler module unloaded\n");
    // Cleanup code
}

module_init(cortex_scheduler_init);
module_exit(cortex_scheduler_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Cortex Linux Team");
MODULE_DESCRIPTION("AI-aware process scheduler");

# Makefile
obj-m += cortex_scheduler.o

KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean

# Build module
make

# Load module
sudo insmod cortex_scheduler.ko

# Check module
lsmod | grep cortex

# Unload module
sudo rmmod cortex_scheduler

Security Model

Process Isolation

  • AI engine runs in isolated process with limited privileges
  • Systemd service runs as dedicated user: cortex-ai
  • No root access required for AI operations
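
A quick way to confirm this isolation on a running system (the systemd unit name cortex-ai is an assumption; adjust it to match your installation):

# Confirm the service account the unit runs as
systemctl show cortex-ai -p User

# Confirm the dedicated user exists and carries no special groups
id cortex-ai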

File System Permissions

# AI engine files
/usr/lib/cortex-ai/          # Read-only for users
/etc/cortex-ai/              # Config: root-owned, 644
/var/log/cortex-ai/          # Logs: cortex-ai user, 640
/var/cache/cortex-ai/        # Cache: cortex-ai user, 750
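
If an audit finds drift from the layout above, ownership and modes can be restored with standard commands (the cortex-ai group name is an assumption):

# Restore expected ownership and modes on the log directory
sudo chown -R cortex-ai:cortex-ai /var/log/cortex-ai/
sudo find /var/log/cortex-ai/ -type f -exec chmod 640 {} +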

Network Security

  • HTTP API binds to localhost by default
  • Optional API key authentication
  • Rate limiting to prevent abuse
  • CORS configuration for web applications
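
To verify that the API really is bound to localhost only:

# Should show a listener on 127.0.0.1:8080, not 0.0.0.0
ss -tlnp | grep 8080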

Code Signing

All packages are signed with GPG:

# Verify package signature
dpkg-sig --verify cortex-ai_1.0.0_amd64.deb

# Import signing key
gpg --import cortex-linux-signing-key.asc
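
The RPM packages can be verified the same way (assuming the signing key has been imported into the RPM database):

# Import the key, then check the package signature
sudo rpm --import cortex-linux-signing-key.asc
rpm --checksig cortex-ai-1.0.0-1.x86_64.rpm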

Security Best Practices

  1. Never run AI engine as root
  2. Use API keys in production
  3. Restrict network access to API
  4. Regular security updates
  5. Monitor logs for suspicious activity

# Check security status
cortex-ai security-check

# Review security configuration
cat /etc/cortex-ai/security.yaml

Testing

Unit Tests

# Run all unit tests
make test

# Run specific test suite
cd cortex-ai-engine
./tests/run_tests.sh

# Run with verbose output
make test VERBOSE=1

Integration Tests

# Test HTTP API
cd cortex-api-server
go test ./...

# Test CLI
cd cortex-cli
cargo test

# End-to-end tests
./tests/integration/test_e2e.sh

Performance Tests

# Benchmark AI engine
./benchmarks/benchmark_engine.sh

# Load testing
./tests/load/load_test.sh --requests 1000 --concurrent 10

Test Coverage

# Generate coverage report
make test-coverage

# View report
xdg-open coverage/index.html

Debugging

Debugging AI Engine

# Enable debug logging
export CORTEX_LOG_LEVEL=DEBUG
cortex-ai reason "Test query"

# Use gdb
gdb --args cortex-ai reason "Test query"
(gdb) break cortex_reason
(gdb) run

Debugging HTTP API

# Enable debug mode
cortex-api-server --debug --log-level debug

# Use Delve (Go debugger)
dlv debug ./cortex-api-server
(dlv) break main.main
(dlv) continue

Kernel Debugging

# Enable kernel debugging (raise console log level; requires root)
echo 8 | sudo tee /proc/sys/kernel/printk

# View kernel messages
dmesg | tail -100

# Use kgdb for remote debugging
# See kernel documentation

Profiling

# Profile AI engine
perf record -g cortex-ai reason "Test query"
perf report

# Memory profiling (Valgrind)
valgrind --leak-check=full cortex-ai reason "Test query"

Development Tools

Recommended IDE Setup

VS Code:

{
  "C_Cpp.default.includePath": [
    "/usr/include/cortex-ai"
  ],
  "python.linting.enabled": true,
  "python.linting.pylintEnabled": true
}

CLion: Import CMake project from cortex-ai-engine/

Code Formatting

# Format C code (clang-format)
clang-format -i src/**/*.c src/**/*.h

# Format Python code (black)
black cortex-python-sdk/

# Format Rust code
cargo fmt

Linting

# Lint C code
cppcheck src/

# Lint Python code
pylint cortex-python-sdk/

# Lint Rust code
cargo clippy

Release Process

Version Numbering

Follow Semantic Versioning:

  • MAJOR: Incompatible API changes
  • MINOR: New features (backward compatible)
  • PATCH: Bug fixes (backward compatible)

Release Checklist

  1. Update version numbers
  2. Update CHANGELOG.md
  3. Run full test suite
  4. Build packages
  5. Sign packages
  6. Create GitHub release
  7. Publish packages to repository

# Release script
./scripts/release.sh 1.0.1
