30 March 2026
As artificial intelligence becomes increasingly embedded in trading and investment processes, a fundamental question is emerging for the financial industry: when AI makes decisions, who is ultimately responsible? A recent article published by the CFA Institute examines this issue, highlighting how the growing autonomy of algorithms is reshaping accountability in investment management.
AI systems are now widely used across the investment lifecycle - from portfolio construction to execution and risk management - bringing efficiency, speed and data-driven insights. However, their increasing complexity and opacity raise critical concerns. Many models operate as “black boxes,” making it difficult to fully understand or explain how decisions are generated.
This creates a tension at the heart of modern finance. While technology can automate processes and enhance performance, responsibility cannot be delegated in the same way. Investment decisions - even when supported or executed by AI - remain subject to professional standards, fiduciary duties and regulatory oversight.
The CFA Institute analysis emphasizes that accountability must remain firmly anchored in human actors. Investment professionals, firms and senior management cannot rely on the opacity or autonomy of algorithms as a shield. Instead, they must ensure that AI systems are designed, monitored and governed in ways that align with client interests and market integrity.
This requires a shift from viewing AI as a tool to recognizing it as part of a broader decision-making system. Governance frameworks must address the full lifecycle of AI models - from data inputs and model development to deployment and ongoing monitoring. Establishing clear lines of responsibility within this process is essential, particularly as systems become more adaptive and less predictable.
Ethical considerations are central to this discussion. Issues such as bias, transparency and explainability are not merely technical challenges, but professional obligations. Investment practitioners must be able to justify decisions, communicate risks to clients and ensure that outcomes are fair and consistent with fiduciary duties.
The article also highlights the importance of maintaining meaningful human oversight. Even as AI systems take on more complex tasks, human judgment remains critical - both to validate outputs and to intervene when models behave unexpectedly or when market conditions change rapidly. Overreliance on automated systems can introduce new forms of systemic risk, particularly in fast-moving or stressed market environments.
For the investment profession, the implications are significant. As AI becomes more pervasive, accountability frameworks must evolve to reflect the hybrid nature of decision-making, where humans and machines operate together. This includes strengthening internal controls, enhancing model governance, and ensuring that professionals have the skills and understanding needed to oversee increasingly sophisticated systems.
For members of CFA Society Italy, the message is clear: technological innovation does not diminish responsibility - it amplifies it. In a world where decisions can be generated at scale and speed, the role of the investment professional becomes even more critical in ensuring that those decisions are aligned with ethical principles, regulatory standards and the long-term interests of clients.
Ultimately, the question is not whether AI can trade - it already does. The real question is how the industry ensures that, behind every algorithmic decision, there remains a clear framework of human accountability.