French start-up Mistral was billed to lead Europe’s charge in the global AI market — until last month, when it sealed a partnership with Microsoft. Hopes of the EU regulating the sector are crashing up against the sheer power of monopolies.

The rosy, David-versus-Goliath story surrounding Mistral AI’s rise was undercut in late February, when it was announced that the firm had sealed a partnership with Microsoft. (CFOTO / Future Publishing via Getty Images)

Valued at €2 billion, French start-up Mistral AI has been billed as one of Europe’s great hopes in the escalating commercial battle over artificial intelligence (AI). Can the Paris-based outfit — founded in April 2023 by three former researchers at Google and at Facebook’s parent company Meta — find a place in a market carved up by the Silicon Valley giants? Alongside German company Aleph Alpha, Mistral has caught the eye of key investors on both sides of the Atlantic, hauling in upward of €500 million in last year’s funding rounds from the likes of veteran venture capital firm Andreessen Horowitz and French billionaire Xavier Niel. Headed by the photogenic Arthur Mensch, a graduate of the elite École Polytechnique, the company was one of France’s star guests at this winter’s edition of the World Economic Forum in Davos. Has Emmanuel Macron’s “start-up nation” finally found its champion?

Perhaps not. The rosy, David-versus-Goliath story surrounding Mistral’s rise was undercut in late February, when it was announced that the firm had sealed a partnership with Microsoft. The US behemoth already has a commanding stake in the burgeoning AI industry through its $13 billion investment in OpenAI, whose ChatGPT is already positioned as the dominant platform in the field of generative, general-purpose AI. Valued at just $16 million, Microsoft’s capital infusion in Mistral is tiny by comparison, but it points to a pattern that has upset European efforts at cultivating autonomous tech hopefuls. The latter inexorably feel the gravitational pull of the stockpile of investable capital held by US firms, as well as of the technological and distribution infrastructure that businesses like Microsoft can offer.

Max von Thun, Europe director at the Open Markets Institute, told Jacobin that the new partnership between Microsoft and Mistral AI is symptomatic of the “huge structural concentration that you see in the tech sector, which is not new, which has been around for a long time, but which has basically put the big tech companies in a position to essentially co-opt or neutralize any potential players in AI who might challenge them directly.”

Mistral has built its identity around its open-source model that can be modified and adapted by clients. What it stands to gain from its partnership with Microsoft is access to the latter’s enormous computing power and key position in market infrastructure.

“Here’s the catch: I can build an open-source model, but the challenge is to get it to the market and to the customer. As a company, that is what I care about,” Kris Shrishak, senior fellow at the Irish Council for Civil Liberties, told Jacobin. “Distribution is a problem because they’re still a business. They need to make money. Microsoft gives them a pathway to that, by integrating it and offering it on their Azure marketplace.”

“If you want to train a cutting-edge model and if you want to have the scale to commercialize it quickly, then you kind of have no choice but to sign a deal with one of these companies,” von Thun told Jacobin, referring to the commanding position enjoyed by Silicon Valley firms. “That kind of concentration upstream in cloud computing and in semiconductors is really where the problem is. And these types of deals will just keep happening until we confront that.”

The February 24 announcement of the Microsoft-Mistral partnership caused a stir in antitrust and technology circles in Brussels. For EU authorities, grappling with the structural power of non-European big tech has indeed become a growing preoccupation in recent years. Shortly after Mistral’s deal with Microsoft went public, the European Commission stated that it would examine the new partnership.


But observers and anti-monopoly activists are skeptical of the potential scope of any investigation, to say nothing of the possibility that the EU will confront Microsoft when push comes to shove. “What was reported as an investigation was a bit of an overstatement,” says von Thun. “There’s no formal investigation into the partnership. Basically, the commission is doing a wider consultation on competition and generative AI, which is just an information gathering exercise.” Any examination of this deal is likely to be folded into the EU’s inquiry, announced in January, into Microsoft’s stake in OpenAI.

Blind Eyes

Mistral’s rapprochement with Microsoft leaves a bitter aftertaste for those who’ve followed the European debate over AI. In the lead-up to the adoption of the EU’s AI Act in December, industry lobbyists and leading member states like France held the firm up as an argument against regulating AI through human rights protections. Adhering too closely to what lobbyists and observers call the “rights-based” approach enshrined in Europe’s past digital regulation drives, like the General Data Protection Regulation, would supposedly saddle European businesses with yet another unneeded burden on an already challenging digital battlefield. Making up for Europe’s relative technological lag — and not missing what many have marketed as the next great technological revolution — required turning a blind eye to protections for fundamental rights.

The “risk-based” approach that infuses the European Union’s AI Act, which emerged from the trilogue negotiations in December between the European Council, the European Parliament, and the European Commission, was a product of this shift in political priorities. As the debate over AI heated up, the act became a proxy for concerns about the decline of EU industry relative to the United States. The result is a relatively watered-down piece of legislation, riddled with loopholes and largely toothless despite the potential scope of rights abuses made possible by the dissemination of AI. The AI Act was approved by the European Parliament on March 13, preparing the legislative package for adoption.

Only the most egregious applications of AI — those in the highest, “unacceptable risk” category, like social scoring, biometric categorization, and emotion recognition at work or school — are banned for private purposes. Otherwise, the act prioritizes transparency, namely through disclosure of the activities and services created with AI, or conformity assessments for “high-risk” uses such as AI in medical procedures or infrastructure management.

“It was really a concession to industry from the get-go,” says Daniel Leufer, senior policy analyst at the digital rights advocacy group Access Now, of the act’s “risk-based” approach. “Fundamentally, what the AI Act does is regulate a discrete set of high-risk use-cases of AI. Those uses of AI are subject to obligations, but other ones are not. It isn’t focused on affected people. You’re not giving people certain rights. It’s just that there’s transparency and responsible development practices, etc. mandated for a list.”


The devil is in the details with a technology that involves the mass processing of data, and that can be shown to handle information drawn from certain categories of the population differently — and with varying degrees of accuracy. “There are issues with having an exhaustive list,” Leufer continued. “What about if multiple low-risk systems combine to create risk? Or what about if a system is only risky for 1 percent of the population?”

These weaknesses are especially pronounced when it comes to the use of AI for security and policing, with the AI Act essentially rubber-stamping a leap toward police application of these technologies. Thanks to the concerted lobbying of the defense-tech industry and officials in Europe’s security apparatus, states are exempted from the modest safety controls in the tiered, risk-based system — and the bans on the most threatening uses of AI that apply to private actors.

“If you think about policing and migration and other state authorities subject to this law, we traditionally have rights vis-à-vis those entities. We have procedural rights. We have rights to information. We have rights to the presumption of innocence,” says Ella Jakubowska, head of policy at European Digital Rights. “But the sense that we get from the AI Act is that if you add an AI layer over those state actions, those same principles no longer apply. The AI Act is not demanding and ensuring that we will have transparency over the decisions that are made about us, that we will have a right to that information, that we will be free from arbitrary or unfair or biased decisions resulting from these systems.”

The rights observers and advocates that Jacobin spoke to lay special emphasis on the French state’s lobbying activities in carving out exceptions for policing and security purposes. And while Mistral’s ambition to compete in the field of general-purpose AI sets it apart from these more specific functions, the company is characteristic of the political lobbying clout of European and American tech firms in the drafting of the new regulatory framework.

Mistral has benefited from a very close relationship with the French government. One might even say that the firm is joined at the hip to what the French like to call la Macronie — the web of individuals who rose alongside and revolve around the current president. Having served as treasurer for Macron’s start-up party En Marche! in 2016, Cédric O joined government as Macron’s state secretary for the digital transition, a post he held between 2019 and 2022. Rights activists and NGOs were therefore shocked to see that O, having put aside his ministerial hat, joined Mistral as a lobbyist in spring 2023 and was quickly hard at work directing industry efforts to shape the direction of the AI Act. He was downright apocalyptic about the legislation as it was being prepared last year, telling L’Opinion in June that the AI Act would mark “the death sentence of artificial intelligence in Europe.”

On March 13, the Commission on AI (in which both O and Mensch participated) issued its report to the French government. Heralding an “unavoidable technological revolution” and warning that “if we do nothing, we’ll miss the train,” the report’s policy recommendations include the creation of a €10 billion public fund, increased investments in semiconductors, the deployment of the technology in public services and education, and facilitated access to personal data to train AI.

Mistral did not respond to a request for comment on the AI Act, its new partnership with Microsoft, and allegations that O’s role with the firm violated French anti-corruption laws, namely the requirement that he not lobby former government colleagues until 2025. He was present at an AI roundtable at the prime minister’s office in September and attended the Bletchley AI Safety Summit in the United Kingdom last fall alongside former colleague and finance minister Bruno Le Maire. To Mediapart last December, O maintained that he “scrupulously respects the obligations required by the law.”

Trojan Horse

The talk of protecting young European start-ups like Mistral was little more than anti-regulatory blackmail, deployed to beat back the push for serious rights protections in the AI Act. “I’ve heard EU policymakers say that we absolutely need EU start-ups that can bring innovative technology,” Leufer told Jacobin. “And I’ll respond: there are loads of them, they just get bought up by big US tech companies who absorb the IP and dissolve the company.”

“There was a collective eye roll in Brussels civil society,” says Jakubowska of the reaction in lobbying networks to Microsoft’s deal with the French firm. The rapprochement is a textbook example of how storytelling about fostering European innovation serves as a Trojan horse for low-impact regulation, with the bitter end result being just what those watered-down regulations were supposedly meant to prevent: the tentacular entryism of dominant US tech firms.

Many likewise wonder whether Mistral was already in talks with Microsoft during negotiations over the AI Act in December. While the deal is still only valued at a modest $16 million capital infusion, to be converted into equity at the next funding round, it could prefigure a more aggressive move into Mistral and the broader European AI sector, already at such a steep structural disadvantage to US capital.

What should be obvious is that European technology does not suffer from an overly aggressive “rights-based” approach toward regulating new digital products. Rather, what’s hamstringing European technology is its junior position to Silicon Valley, whose firms enjoy unmatched capital firepower and an entrenched position in market infrastructure. Thorough enforcement of antitrust law and a more aggressive industrial policy is what’s really needed if policymakers in Brussels are serious about fostering European platforms that could hold their own vis-à-vis the United States.

For all its limitations — and though woefully understaffed compared to the lobbying power of the largest technology firms — Europe does boast an expanding policy arsenal for eroding big tech’s power. Currently being rolled out, the Digital Markets Act (DMA) targets so-called gatekeeper firms (companies with over €75 billion in market capitalization or forty-five million monthly European users). Those found to have abused market power can, in the most egregious cases, see upward of 10 percent of their global revenue claimed in fines. In its current form, the DMA is largely aimed at regulating the big tech platforms’ internal app markets and would need to be updated to encompass activities in cloud computing and AI that are currently excluded from the legislation.

“The DMA definitely has the potential to be really useful in preventing big tech’s domination of AI. The problem is it was designed and legislated before the whole debate on generative AI had started,” says von Thun, who argues that Brussels shouldn’t shy away from a more direct confrontation with the Silicon Valley giants. “Europe is a very important market for them. Outside of the United States, it’s their most important market, especially in a world where things are becoming more decoupled, and you’ve increasingly got a Chinese ecosystem and a Western one. So, I don’t think they can afford to pull out of Europe. When they talk about that, they’re bluffing.”

Calling that bluff will take a degree of political will that’s rarely on offer in Brussels, however. Or perhaps a different set of priorities, which for now are aligned behind what’s good for Mistral — which does not differ greatly from what’s good for Microsoft, for that matter. March 7 was the DMA’s so-called compliance day, when market gatekeepers were expected to start reporting progress on curtailing monopolistic practices.
