A Practical Guide to AI, Python, and Hardware Projects
Welcome to your BeagleY-AI journey! This compact, powerful, and affordable single-board computer is perfect for developers and hobbyists. With its dedicated 4 TOPS AI co-processor and a 1.4 GHz Quad-core Cortex-A53 CPU, the BeagleY-AI is equipped to handle both AI applications and real-time I/O tasks. Powered by the Texas Instruments AM67A processor, it offers DSPs, a 3D graphics unit, and video accelerators.
Inside this handbook, you'll find over 50 hands-on projects that cover a wide range of topics, from basic circuits with LEDs and sensors to AI-driven projects. Each project is written in Python 3 and includes detailed explanations and full program listings to guide you. Whether you're a beginner or more advanced, you can follow these projects as they are or modify them to fit your own creative ideas.
Here’s a glimpse of some exciting projects included in this handbook:
Morse Code Exerciser with LED or Buzzer: Type a message and watch it come to life as an LED or buzzer translates your text into Morse code.
Ultrasonic Distance Measurement: Use an ultrasonic sensor to measure distances and display the result in real time.
Environmental Data Display & Visualization: Collect temperature, pressure, and humidity readings from the BME280 sensor, and display or plot them on a graphical interface.
SPI – Voltmeter with ADC: Learn how to measure voltage using an external ADC and display the results on your BeagleY-AI.
GPS Coordinates Display: Track your location with a GPS module and view geographic coordinates on your screen.
BeagleY-AI and Raspberry Pi 4 Communication: Discover how to make your BeagleY-AI and Raspberry Pi communicate over a serial link and exchange data.
AI-Driven Object Detection with TensorFlow Lite: Set up and run an object detection model using TensorFlow Lite on the BeagleY-AI platform, with complete hardware and software details provided.
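As a taste of the Morse code project, here is a minimal, hardware-free sketch of the text-to-Morse translation step in Python 3. The lookup table and output format are illustrative assumptions, not the book's actual listing; a real version would toggle a GPIO pin for the LED or buzzer instead of printing.

```python
# Minimal text-to-Morse translator (illustrative sketch, not the book's listing).
# Driving a real LED or buzzer would replace the print call with GPIO writes.

MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..", "0": "-----", "1": ".----", "2": "..---", "3": "...--",
    "4": "....-", "5": ".....", "6": "-....", "7": "--...", "8": "---..",
    "9": "----.",
}

def to_morse(text: str) -> str:
    """Translate text to Morse, separating letters with spaces and words with ' / '."""
    words = text.upper().split()
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE) for word in words
    )

if __name__ == "__main__":
    print(to_morse("SOS"))  # ... --- ...
```

From here, blinking the pattern is a matter of mapping each dot and dash to a timed on/off pulse on the LED or buzzer pin.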
Simple and Affordable Digital Signal Processing
The aim of this book is to teach the basic principles of Digital Signal Processing (DSP) and to introduce it from a practical point of view using the bare minimum of mathematics. Only the basic level of discrete-time systems theory is given, sufficient to implement real-time DSP applications. Practical real-time implementations are described using the highly popular ESP32 DevKitC microcontroller development board. With the inexpensive and extremely popular ESP32 microcontroller, you should be able to design elementary DSP projects with sampling frequencies within the audio range. All programming is carried out using the popular Arduino IDE in conjunction with the C language compiler.
After laying a solid foundation of DSP theory, with relevant discussions of the leading DSP software tools on the market, the book presents the following audio and DSP projects:
Using an I²S-based digital microphone to capture audio
Using an I²S-based Class-D audio amplifier and speaker
Playing MP3 music stored on an SD card through an I²S-based amplifier and speaker
Playing MP3 music files stored in the ESP32 flash memory through an I²S-based amplifier and speaker
Mono and stereo Internet radio with I²S-based amplifiers and speakers
Text-to-speech output with an I²S-based amplifier and speaker
Using volume control in I²S-based amplifier and speaker systems
A talking event counter with an I²S-based amplifier and speaker
An adjustable sine-wave generator with an I²S-based amplifier and speaker
Using the Pmod I2S2 fast 24-bit ADC/DAC module
Real-time low-pass and band-pass digital FIR filter design with external and internal A/D and D/A conversion
Real-time low-pass and band-pass digital IIR filter design with external and internal A/D and D/A conversion
Fast Fourier Transforms (FFT)
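To illustrate the kind of FIR filtering implemented in real time on the ESP32, here is a hedged, offline Python sketch of the core convolution step. On the ESP32 the same sum runs per sample in C inside the sampling loop; the 5-tap moving-average coefficients below are just a simple low-pass example for illustration, not a designed filter from the book.

```python
# Offline sketch of the FIR difference equation y[n] = sum_k h[k] * x[n-k].
# A 5-tap moving average is a crude low-pass filter; real designs compute
# h[] from a cutoff specification (e.g., via the window method).

def fir_filter(h, x):
    """Apply FIR coefficients h to input samples x (zero initial conditions)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(h):
            if n - k >= 0:
                acc += coeff * x[n - k]
        y.append(acc)
    return y

if __name__ == "__main__":
    h = [0.2] * 5                     # moving average: low-pass behavior
    x = [0, 0, 1, 1, 1, 1, 1, 1]      # step input
    print(fir_filter(h, x))           # output ramps up to 1.0 over 5 samples
```

The smoothed step response shows the low-pass character: fast changes in the input are spread out over the filter's length.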
AI Projects for the Raspberry Pi with the AI HAT+
Edge AI is transforming everyday devices by putting intelligence where it matters most: directly inside the hardware. With on-device inference, a camera can recognize a visitor instantly, a phone can translate speech without streaming audio to the cloud, and a wearable can detect anomalies in real time—fast, private, and reliable even when the network disappears.
This book is your practical guide to building exactly those kinds of systems with the Raspberry Pi AI HAT+ and the Hailo-8L accelerator. You’ll start with clear foundations: core AI and machine-learning concepts, how neural networks work, and what truly distinguishes Edge AI from cloud AI—plus an honest look at ethical considerations and future impacts.
Then it’s straight to hands-on physical computing. Step by step, you’ll set up Raspberry Pi OS, power and cooling, and develop in Python using the Thonny IDE. You’ll learn GPIO basics with lights and servos, mount the AI HAT+ hardware, install and verify the Hailo software stack, and connect the right camera—official modules or USB webcams, even multiple cameras.
From your first pipeline to real projects, you’ll run person detection, pose estimation, segmentation, and depth estimation, then level up with YOLO object detection: smart alerts, guest counters, and custom extensions. You’ll even connect vision to motion by combining gesture recognition with servo-driven mechanisms, including a robotic arm.
With troubleshooting tips, hardware essentials, and a practical Python refresher, this book turns Edge AI from buzzword into buildable reality.
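The guest counter and smart alerts described above come down to post-processing the detector's output. Here is a hardware-free sketch of that counting logic, assuming detections arrive as (label, confidence) pairs; this is an illustrative assumption, not the Hailo or YOLO API, whose pipelines deliver richer detection objects.

```python
# Counting logic for a hypothetical detection stream (not the Hailo/YOLO API).
# Each frame yields (label, confidence) pairs; we count persons above a threshold.

def count_persons(detections, threshold=0.5):
    """Count detections labeled 'person' whose confidence meets the threshold."""
    return sum(
        1 for label, conf in detections if label == "person" and conf >= threshold
    )

def smart_alert(detections, max_guests=3):
    """Return an alert string when more than max_guests persons are in frame."""
    n = count_persons(detections)
    return f"ALERT: {n} people detected" if n > max_guests else None

if __name__ == "__main__":
    frame = [("person", 0.92), ("person", 0.40), ("dog", 0.88), ("person", 0.71)]
    print(count_persons(frame))  # 2 (the 0.40 detection is below threshold)
```

In a real pipeline the same functions would run on each frame's detection list, with the alert string driving a notification or GPIO output.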
This collection features the best of Elektor Magazine's articles on embedded systems and artificial intelligence. From hands-on programming guides to innovative AI experiments, these pieces offer valuable insights and practical knowledge for engineers, developers, and enthusiasts exploring the evolving intersection of hardware design, software innovation, and intelligent technology.
Contents
Programming PICs from the Ground Up: Assembler routine to output a sine wave
Object-Oriented Programming: A Short Primer Using C++
Programming an FPGA
Tracking Down Microcontroller Buffer Overflows with 0xDEADBEEF
Too Quick to Code and Too Slow to Test?
Understanding the Neurons in Neural Networks: Embedded Neurons
MAUI Programming for PC, Tablet, and Smartphone: The New Framework in Theory and Practice
USB Killer Detector: Better Safe Than Sorry
Understanding the Neurons in Neural Networks: Artificial Neurons
A Bare-Metal Programming Guide
Part 1: For STM32 and Other Controllers
Part 2: Accurate Timing, the UART, and Debugging
Part 3: CMSIS Headers, Automatic Testing, and a Web Server
Introduction to TinyML: Big Is Not Always Better
Microprocessors for Embedded Systems: Peculiar Parts, the Series
FPGAs for Beginners: The Path From MCU to FPGA Programming
AI in Electronics Development: An Update After Only One Year
AI in the Electronics Lab: Google Bard and Flux Copilot Put to the Test
ESP32 and ChatGPT: On the Way to a Self-Programming System…
Audio DSP FX Processor Board
Part 1: Features and Design
Part 2: Creating Applications
Rust + Embedded: A Development Power Duo
A Smart Object Counter: Image Recognition Made Easy with Edge Impulse
Universal Garden Logger: A Step Towards AI Gardening
A VHDL Clock: Made with ChatGPT
TensorFlow Lite on Small Microcontrollers: A (Very) Beginner’s Point of View
Mosquito Detection: Using Open Datasets and Arduino Nicla Vision
Artificial Intelligence Timeline
Intro to AI Algorithms: Prompt: Which Algorithms Implement Each AI Tool?
Bringing AI to the Edge with ESP32-P4
The Growing Role of Edge AI: A Trend Shaping the Future
AlertAlfred, the AI Security System: Based on a Raspberry Pi 5 and the Hailo-8L Module
AI in Electronics Development: An Update After Only One Year
Intro to AI Algorithms: Prompt: Which Algorithms Implement Each AI Tool?
Single-Board Computers for AI Projects: Overview and Background
From Sensor Data to Machine Learning Models: Gesture Detection with Edge Impulse and an Accelerometer
Build a Leaky Integrate-and-Fire Neuron: Artificial Intelligence Without Software
ChatGPT for Electronics Design: Does GPT-4o Do Better?
Bringing Edge AI to the ESP32-P4
Voice Functions on the Raspberry Pi Zero: When Overclocking Gives Freedom of Speech
The Growing Role of Edge AI: A Trend Shaping the Future
Harnessing the Power of Edge AI: An Interview with François de Rochebouët of STMicroelectronics
A VHDL Clock: Made with ChatGPT
The Real Impact of AI: Sayash Kapoor on the "False Miracles of AI" and More
The Latest from BeagleBoard: BeagleY-AI, BeagleV-Fire, BeagleMod, BeaglePlay, and BeagleConnect Freedom
Mosquito Detection: With Arduino Nicla Vision and Open-Source Data
AI Today and Tomorrow: Insights from Espressif, Arduino, and SparkFun
Artificial Intelligence Timeline
BeagleY-AI: The Latest SBC for AI Applications
Spotlight on AI: Perspectives from the Elektor Community
Machine Vision with OpenMV: Building a Soda-Can Detector
Conversation with the Digital Mind: ChatGPT vs Gemini
"Skilling Me Softly with This Bot?": The Rise of AI in the Electronics Sector Held Back by a Lack of Social Precision?
A Beginner's Guide to AI and Edge Computing
Artificial intelligence (AI) is now part of our daily lives. With companies building low-cost AI hardware into their products, it has become possible to purchase AI accelerator hardware at comparatively low cost. One such accelerator is the Hailo module, which is fully compatible with the Raspberry Pi 5. The Raspberry Pi AI Kit is cleverly designed: it bundles an M.2-based Hailo-8L accelerator with the Raspberry Pi M.2 HAT+ to offer high-speed inferencing on the Raspberry Pi 5. Using the Raspberry Pi AI Kit, you can build complex AI-based vision applications running in real time, such as object detection, pose estimation, instance segmentation, home automation, security, robotics, and many other neural-network-based applications.
This book is an introduction to the Raspberry Pi AI Kit. It aims to help readers who are new to the kit and want to run simple AI-based vision models on their Raspberry Pi 5 computers. The book does not cover the detailed process of model creation and compilation, which requires an Ubuntu computer with large disk space and 32 GB of memory. Examples of pre-trained and custom object detection are given in the book.
Two fully tested and working projects are given in the book. The first project explains how a person can be detected, how an LED can be activated after the detection, and how the detection can be acknowledged by pressing an external button. The second project illustrates how a person can be detected, how this information can be passed to a smartphone over a Wi-Fi link, and how the detection can be acknowledged by sending a message from the smartphone back to the Raspberry Pi 5.
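The first project's detect-then-acknowledge flow can be sketched as a tiny state machine. This is a hedged, hardware-free illustration: the led_on flag and press_button method stand in for real GPIO calls, and the book's actual project wires this logic into the detection pipeline.

```python
# Hardware-free sketch of the detect/acknowledge logic from the first project.
# led_on stands in for a GPIO output; press_button() for the external push button.

class DetectionAlarm:
    def __init__(self):
        self.led_on = False

    def person_detected(self):
        """Called by the detection pipeline when a person is found."""
        self.led_on = True

    def press_button(self):
        """Acknowledge the detection: turn the LED back off."""
        self.led_on = False

if __name__ == "__main__":
    alarm = DetectionAlarm()
    alarm.person_detected()
    print(alarm.led_on)   # True: LED lit after detection
    alarm.press_button()
    print(alarm.led_on)   # False: acknowledged
```

The second project follows the same pattern, except the acknowledgment arrives as a Wi-Fi message from the smartphone instead of a button press.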
The reComputer J1020 v2 is a compact edge AI device powered by the NVIDIA Jetson Nano 4 GB, delivering 0.5 TFLOPS of AI performance. It features a rugged aluminum case with a passive heat sink and comes preinstalled with JetPack 4.6.1. The device includes 16 GB of onboard eMMC storage and offers 2x CSI, 4x USB 3.0, an M.2 Key M slot, HDMI, and DP.
Applications
Computer vision
Machine learning
Autonomous mobile robots (AMR)
Specifications
Jetson Nano 4 GB System-on-Module
AI performance
Jetson Nano 4 GB (0.5 TFLOPS)
GPU
NVIDIA Maxwell architecture with 128 NVIDIA CUDA cores
CPU
Quad-core Arm Cortex-A57 MPCore processor
Memory
4 GB 64-bit LPDDR4, 25.6 GB/s
Video encoder
1x 4K30 | 2x 1080p60 | 4x 1080p30 | 4x 720p60 | 9x 720p30 (H.265 and H.264)
Video decoder
1x 4K60 | 2x 4K30 | 4x 1080p60 | 8x 1080p30 | 9x 720p60 (H.265 and H.264)
Carrier Board
Storage
1x M.2 Key M PCIe
Networking
Ethernet
1x RJ-45 Gigabit Ethernet (10/100/1000 Mbps)
I/O
USB
4x USB 3.0 Type-A; 1x micro-USB port for device mode
CSI camera
2x CSI (2-lane, 15-pin)
Display
1x HDMI Type A; 1x DP
Fan
1x 4-pin fan connector (5 V PWM)
CAN
1x CAN
Multifunction port
1x 40-pin expansion header
1x 12-pin control and UART header
Power
12 V/2 A DC
Mechanical
Dimensions
130 × 120 × 50 mm (with case)
Mounting
Desktop, wall mount
Operating temperature
−10°C to 60°C
Included
reComputer J1020 v2 (with system installed)
12 V/2 A power adapter (with 5 interchangeable plug adapters)
Downloads
reComputer J1020 v2 datasheet
reComputer J1020 v2 3D file
Seeed NVIDIA Jetson Product Catalog
NVIDIA Jetson Device and Carrier Boards Comparison
Learn programming for Alexa devices, extend it to smart home devices and control the Raspberry Pi
The book is split into two parts: the first part covers creating Alexa skills and the second part, designing Internet of Things and Smart Home devices using a Raspberry Pi.
The first chapters describe the process of Alexa communication, opening an Amazon account, and creating a skill for free. The operation of an Alexa skill and terminology such as utterances, intents, slots, and conversations are explained. Debugging your code, saving user data between sessions, S3 data storage, and the DynamoDB database are discussed.
In-skill purchasing, which enables users to buy items from within your skill, is outlined, along with certification and publication. Creating skills using AWS Lambda and the ASK CLI is covered, along with the Visual Studio Code editor and local debugging. Also covered is the process of designing skills for visual displays and interactive touch designs using the Alexa Presentation Language.
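To make the intent terminology concrete, here is a hedged sketch of the smallest possible Alexa skill handler in Python, written as a bare Lambda-style function over the JSON request envelope that Alexa sends (the ASK SDK, covered in the book, wraps this same structure). The intent name "HelloIntent" is an assumption for illustration, not one of the book's skills.

```python
# Minimal Lambda-style Alexa handler (illustrative; the ASK SDK wraps this).
# Alexa POSTs a JSON envelope; we branch on the request type and intent name.

def build_response(speech_text, end_session=True):
    """Wrap plain text in the response envelope Alexa expects."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context=None):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        return build_response("Welcome to the demo skill.", end_session=False)
    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "HelloIntent":  # hypothetical intent
            return build_response("Hello from your Raspberry Pi skill!")
    return build_response("Sorry, I did not understand that.")
```

Slots, session attributes, and APL directives extend this same envelope, which is why the book spends time on the request/response structure before moving to the SDK.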
The second half of the book starts by creating a Raspberry Pi IoT 'thing' to control a robot from your Alexa device. This covers security issues and methods of sending and receiving MQTT messages between an Alexa device and the Raspberry Pi.
Creating a smart home device is described including forming a security profile, linking with Amazon, and writing a Lambda function that gets triggered by an Alexa skill. Device discovery and on/off control is demonstrated.
Next, readers discover how to control a smart home Raspberry Pi display from an Alexa skill using Simple Queue Service (SQS) messaging to switch the display on and off or change the color.
A Node-RED design is discussed, from the basic user interface right up to configuring MQTT nodes. MQTT messages sent by a user are displayed on a Raspberry Pi.
A chapter discusses sending proactive notifications, such as a weather alert, from a Raspberry Pi to an Alexa device. The book concludes by explaining how to set up the Raspberry Pi as a stand-alone Alexa device.
ModbusRTU and ModbusTCP examples with the Arduino Uno and ESP8266
Introduction to PLC programming with OpenPLC, the first fully open source Programmable Logic Controller on the Raspberry Pi, and Modbus examples with Arduino Uno and ESP8266
PLC programming is very common in industry and home automation. This book describes how the Raspberry Pi 4 can be used as a Programmable Logic Controller. Before taking you into the programming, the author starts with the software installation on the Raspberry Pi and the PLC editor on the PC, followed by a description of the hardware.
You'll then find interesting examples in the different programming languages complying with the IEC 61131-3 standard. This manual also explains in detail how to use the PLC editor and how to load and execute the programs on the Raspberry Pi. All IEC languages are explained with examples, from LD (Ladder Diagram) through ST (Structured Text) to SFC (Sequential Function Chart). All examples can be downloaded from the author's website.
Networking gets thorough attention too. The Arduino Uno and the ESP8266 are programmed as ModbusRTU or ModbusTCP modules to access external peripherals, read sensors, and switch electrical loads. I/O circuits complying with the 24 V industry standard may also be of interest to the reader.
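Every ModbusRTU frame the Arduino Uno or ESP8266 exchanges ends with a CRC-16 checksum. As a hedged, self-contained illustration of that part of the protocol, here is the standard Modbus CRC-16 (initial value 0xFFFF, reflected polynomial 0xA001) in Python; the modules in the book compute the same value in C on the microcontroller side.

```python
# CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001,
# appended to the frame low byte first.

def crc16_modbus(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

if __name__ == "__main__":
    # Standard check value for the ASCII string "123456789"
    print(hex(crc16_modbus(b"123456789")))  # 0x4b37
```

A receiver recomputes this over the frame (address, function code, data) and rejects the frame if the appended CRC does not match.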
The book ends with an overview of commands for ST and LD. After reading the book, the reader will be able to create their own controllers with the Raspberry Pi.
Most people increasingly encounter applications of artificial intelligence (AI). Music and video recommendations, navigation systems, shopping suggestions, and more are based on methods that belong to this field.
The term artificial intelligence was coined in 1956 at an international conference known as the Dartmouth Summer Research Project. One fundamental approach was to model the workings of the human brain and to build advanced computer systems on that basis. It was expected that the functioning of the human mind would soon be understood, and transferring it to a machine was considered only a small step. That notion proved somewhat too optimistic. Nevertheless, the progress of modern AI, or rather of its subfield known as machine learning (ML), can no longer be denied.
In this book, several different systems are used to explore machine-learning methods in more detail. Alongside the PC, the Raspberry Pi and the Maixduino demonstrate their capabilities in the various projects. In addition to applications such as object and face recognition, practical systems such as bottle detectors, people counters, and a "talking eye" are also built.
The latter can acoustically describe automatically detected objects or faces. For example, if a vehicle is in the field of view of the connected camera, the message "I see a car!" is output via electronically generated speech. Such devices are very interesting examples of how, for instance, blind or severely visually impaired people can also benefit from AI systems.