Many sectors face critical staff shortages due to aging populations and declining birth rates. This demographic shift threatens productivity, especially in fields like computer science and cybersecurity, where demand for skilled professionals is rising alongside system complexity and regulatory requirements (Piras et al. 2020). As ICT companies expand, the need for software and security engineers grows, yet programming remains a time-consuming and intricate task.
To sustain productivity, automation and optimization are essential. These strategies streamline the software engineering cycle by raising abstraction levels, reducing time and resource demands, and improving code security. AI offers promising support, for example through code-generation tools (Vaithilingam et al. 2022). However, AI-generated code often contains errors and scales poorly, requiring manual debugging and refinement. While AI can produce valid snippets, it struggles to create complete, secure systems at scale.
A promising solution is decomposing complex systems into smaller sub-problems, allowing AI to generate components more effectively. Goal Modeling (GM) techniques are well-established for this purpose, enabling hierarchical decomposition of requirements (Horkoff et al. 2019). GM and visual programming help developers work at higher abstraction levels, potentially accelerating design.
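As a minimal illustration of the decomposition idea, the sketch below (all class and goal names are hypothetical, not part of any cited GM notation) represents requirements as a goal tree whose leaf goals are the atomic sub-problems an AI assistant could be asked to generate code for:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    """A node in a hypothetical goal-decomposition tree."""
    name: str
    subgoals: List["Goal"] = field(default_factory=list)

    def leaves(self) -> List["Goal"]:
        """Leaf goals: the atomic sub-problems handed to a code generator."""
        if not self.subgoals:
            return [self]
        result: List["Goal"] = []
        for g in self.subgoals:
            result.extend(g.leaves())
        return result

# Toy decomposition of a "secure user login" requirement:
root = Goal("secure user login", [
    Goal("authenticate user", [
        Goal("validate credentials"),
        Goal("hash password with salt"),
    ]),
    Goal("log access attempt"),
])

print([g.name for g in root.leaves()])
# → ['validate credentials', 'hash password with salt', 'log access attempt']
```

Each leaf is small and well-scoped, which is exactly the granularity at which current AI code generators are most reliable; the tree itself preserves the traceability from high-level requirement to generated component.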
We hypothesize that combining GM with AI can democratize software development, enabling broader participation beyond traditional programmers. Integrating this approach with Symbolic AI, Neuro-Symbolic AI, Agentic AI, Neural Networks for code vulnerability detection (Senanayake et al. 2024), SBOM tools, prompt vulnerability detectors, and static/dynamic analysis tools could enhance code security and regulatory compliance (Piras et al. 2020). These synergies could empower diverse professionals to contribute to software engineering, helping address workforce shortages while building secure, high-quality systems.
This research proposes designing a secure, AI-enhanced goal-modeling framework to support rapid software development with reduced skill requirements. By "secure," we refer to integrating software security measures and ensuring compliance with regulations such as the EU AI Act, NIS2, and GDPR (Piras et al. 2020). The research will focus on conceptualizing and prototyping this framework, identifying its key components, and evaluating it in realistic or industrial settings, potentially in collaboration with industry partners.