Over-Regulation in Europe and Apple Intelligence Struggles
Over-regulation in Europe; Apple Intelligence system prompt; Course on building trustworthy AI; AI can't think

Don't Fear AI is here to demystify artificial intelligence! Our platform tackles common fears and misconceptions, offering clear insights into AI’s benefits, limitations, ethics, and security. Plus, we keep things fun with AI memes, making AI accessible and enjoyable for everyone.
What we have for you today:
Over-Regulation in Europe and Its Impact on AI Innovation
Apple's Struggle to Control AI with System Prompts: A Broader Industry Challenge
Learn to Build Trustworthy Machine Learning for Free with Stanford CS 329T
AI can’t think - Apple Researchers
Today’s meme

Over-Regulation in Europe and Its Impact on AI Innovation

In a recent interview, French President Emmanuel Macron highlighted the dangers of over-regulation and underinvestment, using the chemical industry as an example. His message holds strong relevance for the AI industry, which faces similar challenges in Europe. Macron emphasized that excessive regulatory frameworks could stifle competitiveness and hinder industries from scaling up or adapting in a rapidly changing global economy. For AI, the implications are profound.
The pace of technological advancement in AI demands a balance between regulation and innovation. However, stringent regulations, such as those outlined in the EU's AI Act, may slow down the development of AI solutions in Europe, leaving the continent at a disadvantage compared to less restrictive markets. For instance, smaller AI startups, which often operate with limited resources, may struggle to meet the extensive compliance requirements, inhibiting growth and innovation. Meanwhile, underinvestment in AI infrastructure, skills, and research further compounds these challenges.
Macron’s call for a simplification agenda underscores the need for policies that encourage, rather than inhibit, industry growth. In AI, this could translate to more flexible guidelines that allow for testing and adaptation, while still upholding ethical and safety standards. Additionally, fostering a supportive investment environment could empower European AI companies to compete globally.
In summary, striking the right balance between regulation and innovation is crucial for Europe to maintain a thriving AI ecosystem. By aligning regulatory frameworks with investment and support, Europe can enable AI to flourish and remain competitive in the global tech landscape.
Full article - https://www.bloomberg.com/news/videos/2024-10-02/macron-says-europe-is-over-regulating-under-investing-video
Apple's Struggle to Control AI with System Prompts: A Broader Industry Challenge

Apple Intelligence system prompt. Image source - Andrew Vassili
A leaked system prompt from Apple reveals several specific instructions designed to control AI behavior and ensure reliable, user-aligned responses. Here are the key directives mentioned in the prompt:
"Do not hallucinate." – Avoid generating made-up or false information.
"Do not make up factual information." – Responses must be grounded in factual accuracy.
"Present your output in a JSON format." – Ensure responses are structured and machine-readable.
"Only output valid JSON." – Prevents errors in applications that rely on structured responses.
"Limit the answer within 50 words." – Responses should be concise and to the point.
"Preserve the input mail tone." – The AI should match the tone of the user’s input.
"Help identify relevant questions." – Assist the user in identifying important questions from emails.
"Ask relevant questions based on the mail." – Only ask questions that directly relate to the content in the email.
These directives reflect Apple's attempt to guide AI responses and maintain consistency. However, they also highlight broader challenges in AI development. Relying on prompts alone is often insufficient to prevent issues like "hallucinations" or mismatches in tone and relevance. These challenges aren't unique to Apple; they’re common across the AI industry, as developers strive to control model behavior while ensuring adaptability and user trust.
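To make that concrete, here is a minimal sketch of how directives like these might be assembled into a system prompt and then double-checked in code. Only the quoted directives come from the leak; the scaffolding (build_system_prompt, validate_reply, and the call_model placeholder) is hypothetical and not Apple's implementation. It also shows why prompting alone isn't enough: the application still has to verify that the output really is valid JSON and within the word limit.

```python
import json

# Directives quoted from the leaked prompt; the surrounding scaffolding is a
# hypothetical illustration, not Apple's actual implementation.
DIRECTIVES = [
    "Do not hallucinate.",
    "Do not make up factual information.",
    "Present your output in a JSON format.",
    "Only output valid JSON.",
    "Limit the answer within 50 words.",
    "Preserve the input mail tone.",
]

def build_system_prompt(mail_text: str) -> str:
    """Combine the behavioural directives with the user's email."""
    return "\n".join(DIRECTIVES) + f"\n\nMail:\n{mail_text}"

def validate_reply(raw_reply: str):
    """Prompts alone can't guarantee compliance, so the app checks the output anyway."""
    try:
        parsed = json.loads(raw_reply)        # "Only output valid JSON."
    except json.JSONDecodeError:
        return None                           # model ignored the directive
    answer = parsed.get("answer", "")
    if len(answer.split()) > 50:              # "Limit the answer within 50 words."
        return None
    return parsed

# call_model() stands in for whichever LLM endpoint the app uses (hypothetical):
# reply = call_model(build_system_prompt(incoming_mail))
# result = validate_reply(reply)   # retry or fall back if None
```

The point of the validation step is exactly the gap the leak exposes: an instruction like "Do not hallucinate" cannot be enforced by the prompt itself, so real systems have to catch violations after the fact.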
Related article - https://arstechnica.com/gadgets/2024/08/do-not-hallucinate-testers-find-prompts-meant-to-keep-apple-intelligence-on-the-rails/#gsc.tab=0
Learn to Build Trustworthy Machine Learning for Free with Stanford CS 329T

Stanford's CS 329T course on Trustworthy Machine Learning is a free opportunity to learn cutting-edge techniques for building reliable and responsible AI systems. As AI becomes increasingly integrated into our lives, ensuring its trustworthiness is paramount. This course equips you with the knowledge and skills to tackle critical challenges in AI safety and ethics.
Here's what you'll learn:
Foundations of Trustworthy LLMs: The course kicks off with an introduction to trustworthiness in large language models (LLMs), covering key concepts, challenges, and evaluation methods.
LLM Tech Stack: You'll gain a deep understanding of the LLM technology stack, including RAG architecture, tools like LlamaIndex and TruLens, and techniques for evaluating RAGs (a toy retrieve-then-generate sketch follows this list).
Fine-Tuning LLMs: The course explores fine-tuning concepts and tools, preparing you to enhance LLM performance for specific tasks.
Evaluating Models and Apps: Learn how to assess the trustworthiness of AI models and applications across various dimensions, including truthfulness, safety, bias, fairness, robustness, security, privacy, and more.
Grounding and Factuality: Dive into techniques for ensuring LLMs generate grounded and factual responses. You'll explore RAGs, alignment methods, and fine-tuning strategies for factuality.
Verification and Guardrails: Discover methods to verify the grounding and factuality of LLM outputs, and implement guardrails to prevent undesirable behaviors.
Agents: Explore the world of AI agents, understanding their architecture, evaluation, and applications.
Confidence, Calibration, and Uncertainty: Learn about techniques for quantifying the confidence and uncertainty of LLM outputs, ensuring reliable predictions.
Explainability and Data Quality: Understand how to make AI models more transparent and interpretable. You'll also learn about the importance of data quality for supervised fine-tuning and reinforcement learning.
Multi-modality: Expand your knowledge to encompass multi-modal AI systems, going beyond text-based models.
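As a taste of what the tech-stack and grounding lectures cover, here is a toy retrieve-then-generate loop in plain Python. It is only an illustration: the relevance scoring is deliberately crude, and a production stack would use real embeddings plus tools like LlamaIndex for indexing and TruLens for evaluation, which are the tools the course works with.

```python
# A toy retrieval-augmented generation (RAG) loop, sketched in plain Python to
# illustrate the pattern; not the course's code.

def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query words that appear in the document."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / max(len(q_words), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model by pasting retrieved passages into the prompt."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The EU AI Act classifies AI systems by risk level.",
    "RAG pipelines retrieve documents before generating an answer.",
    "Stanford CS 329T covers trustworthy machine learning.",
]
print(build_prompt("What does the EU AI Act do?", corpus))
# The assembled prompt would then be sent to an LLM; grading how well the answer
# stays grounded in the retrieved context is what evaluation tools automate.
```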
The course features lectures, hands-on labs, assignments, and a final project, providing a comprehensive learning experience.
Don't miss this chance to gain valuable skills in building trustworthy AI systems and contribute to a more responsible future for artificial intelligence!
Link to the course - https://web.stanford.edu/class/cs329t/syllabus.html
AI can’t think - Apple Researchers

A recent study by Apple researchers raises critical questions about whether AI systems, specifically large language models (LLMs), can handle abstract concepts and genuine reasoning. Apple's team observed "catastrophic performance drops" when these models attempted simple mathematical reasoning, especially when irrelevant details were added to a problem. The researchers note that schoolchildren can easily tell useful information from misleading detail, whereas the models were frequently thrown off, giving incorrect answers with misplaced confidence.
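To illustrate the kind of probe involved, here is a rough sketch (not the researchers' actual benchmark code): the same word problem is asked twice, once with an irrelevant clause added, and a robust model should give the same correct answer both times. The numbers are made up for illustration, and ask_model is a hypothetical stand-in for whatever model is under test.

```python
# Illustrative re-creation of an "irrelevant detail" check, not the study's code.
# ask_model() is a hypothetical stand-in for the LLM API being tested.

BASE = ("A farmer picks 40 apples on Friday and 60 on Saturday. "
        "How many apples did the farmer pick in total?")
DISTRACTOR = ("A farmer picks 40 apples on Friday and 60 on Saturday; "
              "five of them were slightly smaller than average. "
              "How many apples did the farmer pick in total?")

CORRECT = 40 + 60  # the smaller apples still count, so the answer is unchanged

def robust_to_noise(ask_model) -> bool:
    """True only if the model answers both variants with the same correct total."""
    return ask_model(BASE) == ask_model(DISTRACTOR) == CORRECT

# The study reports that models often stumble on exactly this kind of distractor,
# for example by subtracting the "smaller" items and answering 95 instead of 100.
```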
This study aligns with previous findings that AI systems don't genuinely "think" but rather mimic language patterns learned from training data. Experts like Gary Marcus argue that claims about AI intelligence are overstated, as LLMs lack the human-like abstract reasoning necessary for complex tasks. Melanie Mitchell, an expert in cognitive science, has also found significant gaps in these systems’ ability to solve analogy puzzles, something even young children grasp quickly.
Another concerning aspect of these systems is their tendency to "hallucinate," generating convincing yet entirely inaccurate statements. This flaw was notably seen in OpenAI's Whisper, a speech-to-text model used for sensitive work such as court transcription, which occasionally inserted fabricated content into its transcripts. Left unchecked, such inaccuracies could have severe consequences, particularly in fields like healthcare and law enforcement where precision is essential.
The Apple study, led by Mehrdad Farajtabar, questions whether scaling data or computational power alone can solve these limitations, concluding that it likely cannot. These findings offer a necessary counter-narrative to marketing claims that depict AI products as foolproof, calling for cautious implementation in high-stakes environments.
Link to full article - https://www.latimes.com/business/story/2024-11-01/column-these-apple-researchers-just-proved-that-ai-bots-cant-think-and-possibly-never-will