
T3CON24 Recap - How to Leverage AI in a Competitive Business World
Since the launch of ChatGPT in 2022, the topic of AI has been omnipresent. Artificial intelligence is changing fields ranging from chemistry to economics, and automating decision-making in our daily lives. But from a business perspective, muddled definitions of what AI is — and what it’s capable of — are making it challenging for stakeholders to understand AI’s full potential, and how it can best be integrated into their business practices. So, how can you leverage AI to stay competitive without getting lost in the hype?
In his talk at T3CON24, Prof. Dr. Patrick Glauner provided an overview of the history of AI, what we mean when we talk about AI, the landscape of compliance, and how best to implement AI projects.

AI, Past and Future
AI is everywhere, and many are familiar with its present impact — it was one of the key topics at the 2024 World Economic Forum Annual Meeting (where Glauner himself was a speaker). The 2024 Nobel Prize in Physics honored two pioneers in artificial neural network research, and the Chemistry prize was awarded to a team using AI to predict complex protein structures. Yet, despite its prominence, confusion persists about what AI truly entails.
In a sentence, Glauner defines AI as “automating human decision-making.” He noted that some studies estimate humans make up to 30,000 decisions a day, and that the goal of AI is to make those decisions faster, better, and cheaper.
As a research field, AI dates back to the 1950s. The year 1955 is often cited as its starting point, marked by a proposal for a summer seminar at Dartmouth College the following year. The proposal asked several key questions: How can machines learn? How can we “teach” machines to work with human language? “They felt all of this would be solved by the following summer,” Glauner joked. The founders assumed the new field of AI would quickly answer these questions; decades later, many of them remain open.
Glauner highlighted that we’ve actually been dealing with “AI” applications for decades, but it wasn’t until the recent boom in generative AI that many people realized just how far the field had come. Tools like text-to-image generators such as DALL-E, often used to create fun graphics, have made AI more visible. Yet the fundamental machine learning methods behind these tools have been part of daily life for years. Examples of automation we encounter every day include:
- Optical character recognition, e.g. parsing text from scans or images
- Face recognition
- Spam filtering
- Credit card fraud algorithms
- Recommender systems
- High-frequency trading
AI has already had a tangible impact on our lives, and the leap in generative AI capabilities has only accelerated this trend. But what exactly is AI?

GPTs, Neural Networks, and Transformers: Understanding AI
To explain the methodology of AI models, Dr. Glauner used the example of building a machine to translate German to English. With an “expert system”, you would talk to professional translators, and attempt to distill their knowledge into a set of repeatable rules that you could turn into code. However, this approach has its limits — language systems are vast and complex, they change over time, and grammar rules are often ignored or subverted in human communication.
That’s where machine learning comes in. Instead of hard-coding rules, developers provide the system with examples, allowing it to use statistical methods to infer patterns and rules from the data.
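To make the contrast with hand-written rules concrete, here is a deliberately toy illustration (not an example from the talk): a program that “learns” a tiny German-to-English word lexicon purely from example sentence pairs, by counting which words co-occur, rather than from any grammar rules a developer wrote down.

```python
# Toy illustration of learning from examples instead of hand-coding rules.
# The corpus, words, and method are invented for this sketch.
from collections import Counter, defaultdict

# Tiny parallel corpus: (German sentence, English sentence) example pairs.
corpus = [
    ("der hund schläft", "the dog sleeps"),
    ("die katze schläft", "the cat sleeps"),
    ("der hund frisst", "the dog eats"),
    ("die katze frisst", "the cat eats"),
]

# Count how often each German word co-occurs with each English word.
cooc = defaultdict(Counter)
en_freq = Counter()
for de, en in corpus:
    en_freq.update(en.split())
    for w_de in de.split():
        for w_en in en.split():
            cooc[w_de][w_en] += 1

def translate_word(w_de):
    # Normalize by overall English word frequency so that very common
    # words like "the" do not dominate the alignment.
    return max(cooc[w_de], key=lambda w: cooc[w_de][w] / en_freq[w])

print(translate_word("hund"))  # dog
```

No translation rule was ever written down: the mapping falls out of the statistics of the examples, which is the essential shift from expert systems to machine learning.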
The technology behind the sophisticated AI products of today is a type of machine learning called artificial neural networks. “There’s a lot of misinformation on the internet claiming that artificial neural networks work like the human brain,” Glauner commented. “They don’t. They’re loosely inspired by how the human brain works.”
Glauner showed an example of a typical neural network architecture, with interconnected nodes (or neurons) arranged into layers. On the far left of the diagram was the input layer, where the original data is fed into the network; this data can be text, for example, or images. This is followed by several hidden layers of neurons that connect and feed information to each neuron in the following layer. These connections — represented in the diagram as black lines — are known as weights or parameters, and they’re essentially numerical values that determine how much influence a certain node should have over its successor. Much of the work of training a neural network lies in adjusting these parameters so that the network produces the most accurate predictions.
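The architecture described above can be sketched in a few lines of NumPy (a toy illustration, not code from the talk): two weight matrices hold the “black lines” connecting an input layer to a hidden layer and an output layer, and training would consist of adjusting those numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny fully connected network: 3 inputs -> 4 hidden neurons -> 2 outputs.
# Each matrix entry is one connection (a weight/parameter): a number that
# decides how strongly one node influences a node in the next layer.
W1 = rng.normal(size=(3, 4))  # input layer  -> hidden layer
W2 = rng.normal(size=(4, 2))  # hidden layer -> output layer

def relu(x):
    # A common non-linearity applied at each hidden neuron.
    return np.maximum(0.0, x)

def forward(x):
    hidden = relu(x @ W1)  # each hidden neuron sums its weighted inputs
    return hidden @ W2     # output neurons do the same with hidden values

x = np.array([0.5, -1.0, 2.0])  # one example input
print(forward(x).shape)  # (2,)
```

Here the weights are random, so the outputs are meaningless; training a real network means nudging `W1` and `W2` until the outputs match known correct answers.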
As the layers increase, so does the complexity of features the neurons are able to recognize. Glauner’s example came from a network which Google used to detect cat videos. “The first few layers detect very simple stuff in the images, like edges, changes of intensity,” he explained. “The next layers detect groups of edges — which is where we start to get predictions about whether or not a face is feline — and later layers recognize groups of groups.” These models proved incredibly useful for advancing text and image classification.
A further breakthrough in 2017 profoundly advanced machine learning’s capabilities for natural language processing (NLP). In a paper titled “Attention is All You Need”, a research team from Google proposed a new architecture called a transformer. Transformers use an “attention mechanism”: a statistical method which is able to model how words correlate with a much higher level of accuracy than other network architectures. “The text is broken down into tokens; a small word can be a token, a big word might be broken down into multiple tokens,” Glauner explained. “We look at these tokens in combination to determine which word is likely to follow a specific word.” This model proved to be useful far beyond language processing; as Glauner highlighted, the GPT in ChatGPT stands for Generative Pre-trained Transformer.
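The attention mechanism from that paper reduces to a short formula, softmax(QKᵀ/√d)·V, which the following NumPy sketch implements (this is the published scaled dot-product mechanism, not code from the talk; the token vectors here are random stand-ins for learned embeddings):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention from "Attention Is All You Need"."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token relates to every other token
    weights = softmax(scores)        # each row is a probability distribution (sums to 1)
    return weights @ V, weights      # each output is a weighted mix of all token values

# 4 tokens, each represented by an 8-dimensional vector (random toy embeddings).
rng = np.random.default_rng(1)
Q = K = V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The key idea is visible in `weights`: every token’s new representation blends information from all the other tokens in the sequence, which is what lets the model capture how words correlate in context.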
When (Not) to Use AI
After providing a grounding in the mathematics and technical architectures behind AI models, Glauner moved on to discuss the opportunities — and risks — of AI projects across industries.
“The sad truth is, 80% of AI projects fail,” he noted. “If you want to work effectively with AI, you need an AI strategy.”
Glauner recommended a multi-stranded approach:
- Educate all employees on what AI is and what it can do. The people who will best determine what processes can be automated on a factory floor are factory workers. Educate employees at all levels of your business about what AI can do, and allow them to make suggestions on how it can best optimize their work.
- Garbage in, garbage out. Glauner estimates that for successful AI projects, 80% of the time is spent collecting, aggregating, and cleaning data. High-quality data and the right infrastructure are both vital for good outcomes.
- Use case first. Glauner warned against “doing AI” for the sake of it; start with a concrete, tangible problem, and work from there.
- Start with low-hanging fruit. Find some easy ways to embed AI to optimize your internal work processes. Less critical projects provide a good test ground, and successes here will generate buy-in for more complex, ambitious AI initiatives further down the road.
- Focus on adding value. Customers won’t pay more for your product just because it “uses AI”. Determine how AI can generate real value, and communicate the benefits to your client base.
- Use quantitative metrics. “A lot of companies aren’t great at assessing how good or how bad they are in a specific task,” Glauner explained. If you want to know whether AI is saving time or money, or improving accuracy and quality, establish quantifiable baseline numbers you can use to measure success.
- Human intelligence first. “If you can solve your problem with just two or three rules, please don’t train a neural network!” AI is applied best to that sweet spot where a problem or task is complex, but not hampered by too many mutual dependencies. Human discernment is crucial to applying AI where it will work best.
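To make the last point concrete, here is a hypothetical sketch (not from the talk; the rules and thresholds are invented) of a problem that two or three plain rules solve outright, where training a neural network would be overkill: flagging invoices for manual review.

```python
# Hypothetical illustration of "human intelligence first": three explicit
# rules that need no training data, no model, and are trivially auditable.
def needs_manual_review(invoice):
    if invoice["amount_eur"] > 10_000:     # unusually large amount
        return True
    if invoice["supplier_age_days"] < 30:  # brand-new supplier
        return True
    if invoice["iban_changed"]:            # bank details changed recently
        return True
    return False

print(needs_manual_review({"amount_eur": 12_000,
                           "supplier_age_days": 400,
                           "iban_changed": False}))  # True
```

A rule set like this is transparent and cheap to maintain; machine learning earns its complexity only once the decision depends on patterns no one can write down by hand.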

Compliance and Legal Challenges
Glauner highlighted a further challenge to successful AI cases: compliance. For businesses operating in the EU, compliance is about to become much more complex.
The EU Artificial Intelligence Act came into effect in August 2024. This law applies to all uses of AI within the European Union, and categorizes applications into four levels of risk, ranging from “minimal risk” to “unacceptable”, each with different criteria for compliance. “Unacceptable” risk applications, such as social scoring by governments, are banned under the act. “High risk” indicates use cases where the health, wellbeing, and rights of individuals are contingent on the AI system working with a high level of transparency, security, and accuracy. This includes self-driving cars, AI uses within the judicial system, and AI within education.
One of the key challenges of EU AI Act compliance is the complexity of the bill. The original bill was 120 pages long; the final bill clocks in at over 400 pages. Additionally, the first draft called for only completely error-free and bias-free datasets to be used in high-risk applications; it was decided this standard was unattainable. The subsequent drafts now require datasets to be as error-free and bias-free as possible, making it difficult for impacted companies to fully gauge compliance. Glauner stated that many organizations are currently developing industry standards to help make compliance more manageable. “Industry associations are currently building best practices. They’re developing one- or two-page checklists. The idea is, if you follow these, you could be very certain — albeit not totally certain — that you’re compliant. Right now, you have an opportunity to contribute to these standards,” he emphasized. “Developing high-quality, practical guidelines will avoid having to comply with 400 pages of text.”
Glauner also highlighted a legal AI issue particularly relevant to the content professionals within the TYPO3 community: copyright. In the U.S., a spate of key legal disputes against Microsoft and OpenAI (the creators of ChatGPT) launched by authors and publications — including, most notably, The New York Times — are likely to set important precedents about the “fair use” of copyrighted material in training AI algorithms. In Germany, Section 44b of the Urheberrechtsgesetz permits “text and data mining” only of lawfully accessible material. For works freely available on the web, rightsholders can reserve this use, provided the reservation is made in a “machine-readable format”. What constitutes a machine-readable format is open to legal interpretation; Glauner recommended “adding something to your robots.txt, or a comment in your HTML that your content is not allowed to be used for training a machine.”
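Building on that recommendation, a robots.txt along the following lines is one common way to express such a reservation. The user-agent tokens shown (GPTBot, Google-Extended, CCBot) are published by OpenAI, Google, and Common Crawl respectively; note that crawler compliance is voluntary, the list is not exhaustive, and whether this satisfies the “machine-readable” requirement of Section 44b has not been settled in court.

```text
# robots.txt — ask known AI training crawlers to stay away.
# Compliance is voluntary; this list covers only some operators.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

An HTML comment stating that the content may not be used for training, as Glauner suggested, can complement this, though its legal weight as a machine-readable reservation is equally untested.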
The Global Perspective on AI
Glauner concluded his talk by zooming out from the perspective of individual companies to examine AI on a global scale. He shared that he’s spent a lot of time in recent years in China, a global leader in AI R&D. He recommended the book “AI Superpowers: China, Silicon Valley, and the New World Order” by Kai-Fu Lee for understanding why China is excelling in AI innovation. The book proposes that a relatively open regulatory environment and central government investment are key factors in China’s AI development. Glauner also shared his experiences visiting Huawei’s Dongguan campus: a $1.5 billion R&D facility modeled after 12 European sites.
From a global perspective, AI has already begun transforming industries. Glauner recommended “Competing in the Age of AI: Strategy and Leadership When Algorithms and Networks Run the World” by Marco Iansiti and Karim R. Lakhani as a reference for how data-driven decision-making and automation have completely changed the economy in recent years. In terms of innovation, many think that Germany is currently lagging behind, but Glauner argues this is not the case. Germany ranked in the Top 10 of the International Finance Forum’s Global AI Complex Index (which Glauner co-authored). However, he also conveyed the need for further funding and development in Germany’s AI sector, particularly in venture capital for startups. “If we don’t do enough in AI, future innovations in key German industries — chemistry, mechanical engineering — will happen elsewhere,” he stressed.
Takeaways and Looking Ahead
AI is already leading breakthrough research developments and embedding itself into our daily lives. When it comes to commerce, AI can open up profound opportunities for businesses — but only if it’s used intelligently. As Glauner stated, “People are worried AI will replace them, and put them out of business — it’s actually their competitors who are leveraging AI who will put them out of business.” Harnessing the potential of automation requires understanding AI’s opportunities and limits, focusing on use cases with measurable outcomes, and staying ahead of compliance challenges — key factors that can provide businesses across industries with a competitive edge.
Did you enjoy this recap? If you would like to relive all the exciting moments from T3CON24, be sure to check out our recap of the entire conference!