The Internet of Things (IoT) paradigm is driving a massive increase in the generation of multimodal data on tiny, resource-constrained devices at the far edge of the computing infrastructure. Given the challenges of limited computation, energy constraints, and communication bottlenecks, there is a growing need to process data locally while ensuring efficient and scalable artificial intelligence at the edge. TinyML and Edge AI have demonstrated the feasibility of embedding machine learning models on such devices, yet many challenges remain: expanding their impact in real-world deployments requires addressing the heterogeneity of hardware, data, and resource availability in distributed scenarios.

This research will explore novel approaches for enabling AI at the edge, focusing on one or more of the following aspects:

i) hardware-aware scaling, model compression, and novel approaches for diverse low-power edge devices in distributed and collaborative IoT scenarios;

ii) strategies for efficient and adaptive learning on-device or across a network of heterogeneous nodes while minimizing energy consumption and bandwidth usage;

iii) maintaining explainability and robustness in compressed models deployed at the far edge, ensuring trustworthiness and reliability in real-world applications.

This interdisciplinary research, at the intersection of Artificial Intelligence, Embedded Systems, Distributed Computing, and low-power hardware and protocols, will be tailored to the candidate's profile and interests, contributing to the development of innovative solutions for real-world challenges.
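As a concrete illustration of the model compression mentioned in aspect i), the sketch below shows symmetric per-tensor int8 post-training quantization, one common TinyML technique for shrinking models to fit low-power devices. This is a generic, minimal example, not the project's specific method; the helper names `quantize_int8` and `dequantize` are placeholders chosen here for clarity.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32 (4096 vs 16384 bytes here),
# and the per-element rounding error is bounded by scale / 2.
print(q.nbytes, w.nbytes)
print(float(np.max(np.abs(w - w_hat))))
```

Real deployments typically use per-channel scales and quantization-aware training to limit accuracy loss, but the storage-versus-precision trade-off shown here is the core idea behind fitting models into the few hundred kilobytes of memory typical of far-edge microcontrollers.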