bE-More is a semi-autonomous IoT system engineered to optimize energy consumption in enterprise working environments. The architecture integrates embedded hardware for real-time telemetry, a Java-based dashboard for centralized management, and a locally hosted generative AI for predictive data analysis.
- 📡 IoT & MQTT Telemetry: Real-time, low-latency communication between edge sensors and the centralized ThingsBoard cloud instance.
- ⚙️ Smart Automation: Rule-based environmental control (e.g., automatic lighting shutdown when the ambient natural-light reading exceeds a 450 threshold).
- 🔒 Privacy-First AI Analytics: Uses a locally hosted instance of Mistral:7b (via Ollama) to analyze consumption trends and detect anomalies, ensuring sensitive corporate data never leaves the internal network.
- 🖥️ Hybrid Interface: A robust Java 23 application that seamlessly embeds a visual web dashboard alongside a console-based AI assistant.
The physical layer relies on an Arduino microcontroller handling sensor data acquisition and MQTT transmission.
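The firmware itself is an Arduino C++ sketch, but the shape of a ThingsBoard telemetry upload can be illustrated in Python. The `v1/devices/me/telemetry` topic is ThingsBoard's standard MQTT device API; the field names here are hypothetical stand-ins for this project's actual keys:

```python
import json

# ThingsBoard's standard MQTT device-API topic for telemetry uploads.
TELEMETRY_TOPIC = "v1/devices/me/telemetry"

def build_telemetry(light_level: int, auto_mode: bool, leds_on: bool) -> str:
    """Serialize one sensor snapshot as a ThingsBoard telemetry JSON payload."""
    return json.dumps({
        "lightLevel": light_level,  # raw photoresistor reading (A3)
        "autoMode": auto_mode,      # autonomous mode engaged?
        "ledsOn": leds_on,          # main workspace LEDs state (Pin 5)
    })

payload = build_telemetry(512, True, False)
```

On the device, the C++ sketch publishes an equivalent JSON string to this topic, authenticating with the device access token as the MQTT username.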
The embedded system manages the environment based on the following deterministic rules:
| Input Sensor / Actuator | State | System Action |
|---|---|---|
| "AUTO" Button (Pin 2) | Pressed | Toggles Autonomous Mode. Engages status LED and confirmation Buzzer. |
| "LED" Button (Pin 1) | Pressed | Manual override to toggle Main Workspace LEDs (Pin 5). |
| Photoresistor (A3) | > 450 + Auto Mode ON | Energy Saver: Automatically powers down Main LEDs to reduce consumption. |
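The table above condenses to a small pure function. A Python sketch of the same deterministic logic (threshold and pin roles taken from the table; the function and parameter names are illustrative, not the firmware's identifiers):

```python
LIGHT_THRESHOLD = 450  # photoresistor reading above which daylight suffices

def next_led_state(auto_mode: bool, light_reading: int,
                   led_button_pressed: bool, leds_on: bool) -> bool:
    """Return the new state of the Main Workspace LEDs (Pin 5)."""
    if led_button_pressed:
        # Manual override always wins: toggle the LEDs.
        return not leds_on
    if auto_mode and light_reading > LIGHT_THRESHOLD:
        # Energy Saver: enough ambient light, power the LEDs down.
        return False
    return leds_on  # no rule fired: keep the current state
```

The manual button is checked first so a user can always override the energy saver; the automatic shutdown only fires while Autonomous Mode is engaged.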
The project features a decoupled, multi-tier software architecture:
Developed in Java 23, this application acts as the central operations hub:
- WebView Integration: Natively embeds the local ThingsBoard dashboard (port 8080) for real-time data visualization.
- Process Bridging: Manages the lifecycle and communication with the Python-based AI backend via console streams.
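On the Python side, that console bridge reduces to a read-eval loop over stdin/stdout. A minimal sketch, assuming a line-based protocol (the `PING`/`ANALYZE` command names are hypothetical, not the project's actual protocol):

```python
import sys

def handle_command(line: str) -> str:
    """Map one console command from the Java hub to a single reply line."""
    cmd = line.strip().upper()
    if cmd == "PING":
        return "PONG"
    if cmd == "ANALYZE":
        # In the real service this would trigger the Ollama pipeline.
        return "ANALYSIS_STARTED"
    return f"UNKNOWN_COMMAND:{cmd}"

def main() -> None:
    # The Java application writes commands to our stdin and reads replies
    # from our stdout, so every reply must be flushed immediately.
    for line in sys.stdin:
        print(handle_command(line), flush=True)

if __name__ == "__main__":
    main()
```

On the Java side, a `ProcessBuilder`-launched process exposes exactly these streams, which is what makes a simple line protocol like this workable.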
A Python service acting as the middleware between the IoT data and the Generative AI:
- Data Ingestion: Fetches historical telemetry and state changes from the ThingsBoard API.
- Prompt Engineering: Formats the raw time-series data into contextual prompts optimized for Ollama (Mistral:7b).
- Inference: The LLM processes the data locally to identify inefficiencies, predict trends, and return actionable energy-saving insights directly to the Java console.
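The prompt-engineering and inference steps can be sketched against Ollama's REST API on its default port 11434 (`/api/generate` with `"stream": false` is Ollama's standard non-streaming endpoint; the prompt wording and telemetry field names are assumptions, not this project's exact format):

```python
import json
import urllib.request

def build_prompt(samples: list[dict]) -> str:
    """Flatten raw telemetry samples into a compact prompt for the LLM."""
    lines = [f"{s['ts']}: light={s['light']} ledsOn={s['leds_on']}"
             for s in samples]
    return ("You are an energy-efficiency analyst. Given this telemetry, "
            "identify inefficiencies and suggest savings:\n" + "\n".join(lines))

def ask_mistral(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send one non-streaming generation request to a local Ollama server."""
    body = json.dumps({"model": "mistral", "prompt": prompt, "stream": False})
    req = urllib.request.Request(f"{host}/api/generate", data=body.encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    demo = [{"ts": "10:00", "light": 520, "leds_on": True}]
    print(ask_mistral(build_prompt(demo)))
```

Because the request never leaves `localhost`, the telemetry stays inside the internal network, which is the point of the privacy-first design.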
- Java JDK 23
- ThingsBoard CE (configured on `localhost:8080`)
- Ollama with the Mistral model installed: `ollama pull mistral`
- Hardware Provisioning: Wire the components according to the schematic and flash the provided C++ sketch to the Arduino.
- IoT Platform: Configure the MQTT Device profile and dashboards within your ThingsBoard instance.
- Initialize AI Service: Start the local Ollama inference server: `ollama serve`
- Launch the Hub: Compile and execute the Java application to monitor and optimize your environment.
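Before launching the hub, it can help to confirm both local services are reachable. A small sketch, assuming only the ports stated above (any HTTP answer, even an error page, counts as "up"):

```python
import urllib.error
import urllib.request

SERVICES = {
    "ThingsBoard": "http://localhost:8080",
    "Ollama": "http://localhost:11434",
}

def check(url: str, timeout: float = 2.0) -> bool:
    """Return True if the service answers any HTTP response at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # reachable, even if it answered 4xx/5xx
    except OSError:
        return False  # connection refused / timed out

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name}: {'up' if check(url) else 'DOWN'} ({url})")
```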
For detailed wiring schematics, full architectural diagrams, and step-by-step guides, please refer to the Project Wiki:
- 📄 Extended Hardware & Wiring Guide
- 📄 Full Software Architecture & API Spec
- 🇮🇹 Documentazione in Italiano
Architected and Developed by GiZano

