The EU AI Act’s recent entry into force is more than just another regulatory milestone—it’s a call to action for all of us in the tech and investment communities. I’ve spent years working with entrepreneurs who push the boundaries of what’s possible with AI, and I believe this moment is pivotal. It’s an opportunity not only to adapt but to thrive in a new, regulated environment that prioritises safety, transparency, and accountability.
Rather than viewing compliance as a cumbersome obligation, I see it as a chance to stand out. The Act’s stringent requirements on ‘high-risk’ AI systems create a higher benchmark for safety and ethics in the industry. Companies that embrace these standards proactively will build trust with customers, partners, and regulators alike. In my experience, the firms that lean into these changes are often the ones that gain a long-term competitive advantage.
With nearly 1,000 stakeholders now shaping the first General-Purpose AI Code of Practice, we’re entering uncharted territory. As an investor, I’m keeping a close eye on this space, because it will directly influence how foundational AI models are managed and regulated. This is especially important for those of us with portfolios that include startups or scale-ups developing these models. Early engagement with policymakers is essential—we need to ensure the voices of innovative businesses are heard and incorporated into this evolving framework.
It’s easy to overlook the importance of civil society organisations (CSOs) in the regulatory process. The AI Act is unique in the way it empowers CSOs to advocate for public interest and hold companies accountable. Having witnessed the impact of the Digital Services Act, I know that CSOs will play a critical role in influencing how these regulations are enforced. For entrepreneurs, engaging with these groups isn’t just good PR—it’s a strategic move that can shape the narrative around your business and its impact.
For global players like Google and Meta, balancing compliance across different jurisdictions is already a monumental task. The Act’s nuanced approach—providing exemptions for open-source models while targeting those that pose systemic risks—will require agility and deep expertise. As investors, we must support our portfolio companies in building teams capable of navigating these complexities, ensuring they can adapt swiftly as regulations evolve.
The ongoing debate around the EU AI Office’s appointment of a Lead Scientific Advisor is a clear indication of the high stakes. Attracting top-tier talent is not just a concern for regulators—it’s a priority for every AI-focused business. We’ve seen how companies like OpenAI and DeepMind have succeeded by hiring world-class talent. I believe this will be a defining factor in determining which firms can lead on both regulatory excellence and technological innovation.
As someone who’s always believed in the transformative potential of technology, I find this period incredibly exciting. The EU AI Act is not a roadblock; it’s a new playing field. It’s pushing us to think deeper about the societal impacts of AI and to build businesses that don’t just create value but also uphold values.
Lockheed Capital is more committed than ever to supporting founders who are ready to rise to this challenge—those who see compliance not as a checkbox exercise but as an opportunity to differentiate and drive long-term value.
Let’s talk about what this means for your business and how we can work together to navigate the evolving AI landscape. Whether you’re looking to de-risk, adapt, or lead the charge, we’re here to help you succeed.