Artificial Intelligence and Reality: Why We No Longer Know What's True
Artificial intelligence is profoundly changing our relationship with digital reality. AI is making it increasingly difficult to distinguish what's real from what's synthetic on the internet, directly impacting information, politics, advertising, and business.
All the remedies designed to identify what is real and what is generated by AI online are showing significant limitations. From the C2PA standard to the new European regulations, the picture is fragmented, and big tech companies have no real economic incentive to address the root cause of the problem. The result is that the burden of distinguishing what is authentic falls increasingly on users, companies, and society.
Despite this critical landscape, there are signs of hope. New technical standards, regulations like the AI Act and the Digital Services Act, and a different cultural approach to verifying sources can pave the way for a more trustworthy digital ecosystem. For those working in digital marketing and communications, understanding this transformation is now essential.
Artificial Intelligence and Reality: How AI Has Broken the Trust Pact with Images
For years, the internet has allowed us to stay informed in real time about what's happening around the world. This system, however, was based on an implicit assumption: images and videos, while manipulable, were largely a reliable documentation of reality. With generative artificial intelligence, this pact of trust has been undermined.
The flood of AI content on social media and digital platforms is breaking the age-old bond between what we see and what we believe. Hyper-realistic deepfakes, synthetic photos, cloned audio: anything can be created or altered in minutes, often without clear labels. The consequence is that we can no longer rely on "seeing is believing."
The problem isn't just political disinformation or viral memes. All forms of visual communication are affected: documentaries, reports, advertising, branded content, and institutional propaganda. When even actors like the White House spread AI-generated or AI-manipulated images on social media without clearly disclosing it, it becomes even more difficult for the public to distinguish between documentation and staging, between reporting and storytelling.
C2PA Certificates and Anti-Deepfake Stamps: Why They're No Longer Enough
To address this problem, in recent years there has been a significant focus on labels and certificates of origin for digital content. The main attempt is the C2PA (Coalition for Content Provenance and Authenticity) standard, promoted by companies such as Adobe, Microsoft, OpenAI, Meta, and many others, with the aim of creating a system of Content Credentials.
C2PA works by recording a file's history in a metadata "manifest": who created it, what device it was captured with, what changes it underwent, and what tools were used. In theory, the ideal flow is this: you take a photo with a compatible camera or smartphone; you open the image in enabled software (such as Photoshop); each change is recorded in the C2PA manifest; the file travels with these credentials; and the platforms that publish it display an information panel listing the author, the apps used, and any artificial intelligence tools involved.
Technically, C2PA is a sophisticated evolution of EXIF metadata, with the addition of cryptographic signatures to make manipulation difficult. The project, hosted by a foundation that claims over 500 companies are involved (including Google, Sony, the BBC, and Amazon), was born as a traceability tool for photographers, media, and creatives and was quickly proposed as a response to the invasion of AI content.
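To make the idea of a signed history more concrete, here is a minimal Python sketch of how a cryptographically signed provenance manifest works in principle. It is not the real C2PA format: the manifest fields, author, and device names are invented for illustration, and the example simply uses the widely available cryptography library.

```python
# Simplified illustration of a signed provenance manifest (not the real C2PA format).
# Requires the 'cryptography' package: pip install cryptography
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def content_hash(data: bytes) -> str:
    """Hash that binds the manifest to one specific version of the file."""
    return hashlib.sha256(data).hexdigest()


# 1. The capture device or editing tool builds a manifest describing the file's history.
image_bytes = b"...raw image data..."  # placeholder for the actual file
manifest = {
    "creator": "Jane Photographer",         # hypothetical author
    "device": "ExampleCam X100",             # hypothetical camera
    "edits": ["crop", "color_correction"],
    "ai_tools_used": [],
    "content_hash": content_hash(image_bytes),
}

# 2. The tool signs the manifest with its private key.
signing_key = Ed25519PrivateKey.generate()
payload = json.dumps(manifest, sort_keys=True).encode()
signature = signing_key.sign(payload)

# 3. Anyone with the matching public key can check that neither the manifest
#    nor the file was silently altered after signing.
verifying_key = signing_key.public_key()
edited_bytes = image_bytes + b" retouched without updating the manifest"
try:
    verifying_key.verify(signature, payload)  # raises if the manifest was tampered with
    still_matches = manifest["content_hash"] == content_hash(edited_bytes)
    print("Manifest intact; file matches manifest:", still_matches)  # -> False
except InvalidSignature:
    print("Manifest signature is invalid")
```

In real Content Credentials the signing keys are tied to certificates issued to camera and software vendors and the manifest travels inside the file, but the principle is the same: once the history is signed, it cannot be rewritten without breaking the signature. It can, however, simply be stripped away.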
But the spread of AI content makes it clear that "stamps" alone are not enough. C2PA, in fact, has two structural limitations: it only works well if the content creator wants to be tracked, and credentials are relatively easy to remove or corrupt during conversions, uploads, or reprocessing. Furthermore, the chain breaks at the crucial stage: distribution on social networks, video platforms, and messaging apps, where metadata handling is highly uneven.
To delve deeper into deepfakes and disinformation, it is useful to consult Wikipedia's coverage of the deepfake phenomenon and the reports on media and AI published by the OECD.
Big tech, hidden watermarks, and divergent strategies
Managing the relationship between artificial intelligence and reality is currently fragmented, partly due to the differing strategies of major technology players. Google has developed SynthID, a "hidden" watermarking system that inserts signals directly into the content generated by its models (images, audio, video, text). It also offers a "Detector" and tools integrated into its Gemini assistant to check for the presence of SynthID watermarks.
Some Google products combine SynthID and C2PA metadata, but this doesn't solve the interoperability problem: other players are neither required nor technically equipped to read those signals. TikTok, on the other hand, announced in 2024 its adoption of C2PA to automatically recognize and label AI-generated content from other platforms and, in 2025, introduced an "AI-made" watermark and controls to reduce the presence of AI-generated content in its feed.
YouTube uses text labels such as "AI-generated content," but their application is patchy: many synthetic videos escape automatic flagging, and action often comes only after specific reports. X (formerly Twitter) ended its collaboration with C2PA after the company's acquisition by Elon Musk, while the flow of synthetic content and misinformation has grown. Apple remains largely absent for now, with informal contacts with the C2PA coalition but no public announcements on standards or watermarks; meanwhile, many professional camera manufacturers (Sony, Nikon, Leica) have begun integrating Content Credentials only into their most recent models, leaving out a huge installed base.
The result is a patchwork: some advanced solutions, many gaps, and no uniform coverage. In the background, a clear conflict of interest: the same companies that profit most from AI also control the main content distribution channels and have little economic incentive to clearly label anything that might reduce the perceived value of content and AI investments.
Social perception, deepfakes, and the limits of transparency alone
Another crucial issue in the relationship between artificial intelligence and reality is the social perception of the "AI-created" label. For many creators, this label equates to a delegitimization of their work; for brands, it can suggest a low-cost output; for audiences, it often signifies less authentic and less human content. It's no surprise that many creators are irritated when a platform labels content as "AI" that, from their perspective, is simply retouched photos or videos edited with standard digital tools.
The line between "AI" and non-AI content is increasingly blurred. Smartphones automatically take multiple photos and merge them to achieve the best result; editing software uses neural networks for noise reduction, color correction, and automatic masks. A replaced sky, a softened face, a cleaned-up voice: where does generative AI really begin? Without a basic consensus on the level of algorithmic intervention that turns content into "AI-generated," any label risks being either too aggressive or too permissive.
Psychological studies also show that transparency alone is not enough. Researchers such as Simon Clark and Stephan Lewandowsky (University of Bristol) conducted experiments on over 600 participants in the US and UK: many continued to trust the content of a deepfake video even after being explicitly warned that it was fake. This suggests that the mere indication "this is a deepfake" does not negate the emotional and cognitive impact of the images, with important implications for lawmakers and regulators of online content.
The message emerging from various industry insiders is clear: a collective shift in mentality is needed. Faced with artificial intelligence, we must place less trust in the single frame and much more in context, sources, and cross-checks: a methodical skepticism that requires digital literacy but is increasingly essential for those working in marketing, communications, and business.
AI Act, DSA, and new EU rules for labels and deepfakes
Europe has attempted to address the relationship between artificial intelligence and reality primarily through legislation, with two main instruments: the AI Act (EU Regulation 2024/1689, which entered into force in August 2024) and the Digital Services Act (DSA), which came into force in February 2024. The AI Act is currently the most advanced regulatory attempt globally to manage synthetic content.
Article 50(4) of the AI Act imposes a clear labelling requirement for audio, video, text, or image content generated or manipulated by AI that "clearly resembles existing persons, objects, places, entities, or events," with exceptions where it is obvious from the context that the content is synthetic. Fines can reach up to €15 million or 3% of global turnover, but the obligation is "downstream": it falls on those who distribute the content rather than those who generate it, who are often outside the EU or working with open-source models not subject to the regulation.
Article 6 and Annex III classify certain uses of deepfakes as high-risk, for example in biometric identification or in critical contexts such as elections and justice, imposing strict documentation and traceability requirements. However, most "ordinary" deepfakes (memes, generic disinformation, non-consensual pornography) remain outside these more stringent categories.
The Digital Services Act adds a second layer: transparency requirements for AI-generated advertising, systemic risk assessments for Very Large Online Platforms (VLOPs), and notice-and-action mechanisms for content such as manipulated intimate images. Annual independent audits could become a tool to push platforms to adopt more uniform technical standards, including C2PA, watermarking, and AI-based forensic analysis tools.
However, a structural limitation remains: the AI Act and DSA primarily target "respectable" actors, while those who produce disinformation in bad faith, perhaps from non-EU servers, tend to deliberately ignore the rules. Without technical interoperability requirements, the risk is further fragmentation: each platform adopts its own labeling system, leaving users (and marketers) in a still-confusing ecosystem. For an up-to-date overview of European regulations, consult the European Commission's official pages on artificial intelligence.
Artificial Intelligence and Reality: Impact on Marketing and Business
The new balance between artificial intelligence and reality isn't just an ethical or legal issue: it has a direct impact on digital marketing strategies, customer experience, and brand trust. Companies using generative AI to create images, videos, or copy must consider not only production efficiency, but also how these choices impact the perception of authenticity.
In a landscape dominated by synthetic content, brands that can transparently communicate their use of artificial intelligence while ensuring the traceability and verifiability of messages will have a competitive advantage. Consider political campaigns or crisis communications: a credible deepfake can damage a company's reputation in a matter of hours, and internal processes and monitoring tools are needed to quickly detect, report, and address false content.
For marketing automation, this means rethinking funnels and customer journeys. Messaging platforms, like WhatsApp Business, are becoming key channels for re-establishing a direct and verifiable relationship with customers. Video content viewed on social media can be questioned; a one-to-one conversation, tracked and managed with professional tools, can strengthen trust and conversions. Companies must integrate AI responsibly, balancing automation and human oversight, especially during sensitive communication phases.
Another front is segmentation: users increasingly exposed to misinformation and deepfakes are becoming more suspicious and selective. Content marketing strategies will need to focus on proof, verifiable case studies, traceable testimonials, and proprietary channels (newsletters, communities, direct chats) where brands can ensure greater reliability of information compared to the noise of open social media.
How SendApp Can Help with Artificial Intelligence and Reality
In this complex environment, companies need tools that help them manage digital communications transparently, traceably, and scalably. SendApp was created specifically to support businesses in the professional use of WhatsApp Business, responsibly combining automation, control, and integration with artificial intelligence.
With SendApp Official, businesses can use the official WhatsApp APIs to build reliable messaging flows, with approved templates and traceable communications. This helps create a direct and verifiable channel with customers, complementing public social media where deepfakes and uncontrolled synthetic content circulate.
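To give a sense of what "approved templates and traceable communications" mean at the API level, the following is a minimal sketch of sending a pre-approved template message through Meta's WhatsApp Business Cloud API, the layer that tools like SendApp Official build on. The access token, phone number ID, recipient, and template name are placeholders, and in practice SendApp exposes its own interface rather than requiring raw calls like this.

```python
# Minimal sketch: sending a pre-approved WhatsApp template message via
# Meta's WhatsApp Business Cloud API. Token, IDs, and template name are placeholders.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"      # placeholder
PHONE_NUMBER_ID = "123456789012345"     # placeholder: your WhatsApp Business phone number ID
RECIPIENT = "39330XXXXXXX"              # placeholder: customer number in international format

url = f"https://graph.facebook.com/v19.0/{PHONE_NUMBER_ID}/messages"
payload = {
    "messaging_product": "whatsapp",
    "to": RECIPIENT,
    "type": "template",
    "template": {
        "name": "order_confirmation",   # hypothetical template, approved in advance
        "language": {"code": "en_US"},
    },
}

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
    timeout=10,
)
# A successful call returns a message ID, which makes the communication traceable.
print(response.status_code, response.json())
```

The returned message ID is what makes each conversation auditable over time, which is exactly the kind of traceability that open social feeds lack.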
SendApp Agent allows you to manage team conversations on WhatsApp in an organized way: assigning chats, monitoring performance, and integrating AI-powered automatic replies where appropriate, always under human supervision. In an age when AI can generate messages, having a centralized control layer is essential to maintaining consistency and authenticity in customer relationships.
With SendApp Cloud, companies can design advanced automations: segmented campaigns, nurturing sequences, post-sale reminders, and large-scale customer support. By integrating AI in a structured way, it's possible to automate microtasks (frequent replies, request routing), freeing up time for higher-value activities such as checking sensitive content and managing reputational crises.
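As a simple illustration of the microtask automation just mentioned, the sketch below routes incoming messages: frequent questions get an automatic reply, everything else is escalated to a human queue. The keywords, replies, and queue name are hypothetical; in practice this logic would live inside SendApp Cloud flows or be backed by an AI classifier rather than fixed keywords.

```python
# Illustrative sketch of request routing: auto-answer frequent questions,
# escalate everything else to a human agent. Keywords and replies are hypothetical.
AUTO_REPLIES = {
    "opening hours": "We are open Monday to Friday, 9:00-18:00.",
    "shipping": "Standard shipping takes 2-4 business days.",
    "invoice": "You can download invoices from your account page.",
}

def route_message(text: str) -> dict:
    """Return an automatic reply for known topics, otherwise escalate to a human."""
    lowered = text.lower()
    for keyword, reply in AUTO_REPLIES.items():
        if keyword in lowered:
            return {"action": "auto_reply", "reply": reply}
    # Sensitive or unrecognized requests stay under human supervision.
    return {"action": "escalate", "queue": "human_support"}

print(route_message("Hi, what are your opening hours?"))
print(route_message("I think this ad uses a deepfake of your CEO"))  # -> escalate
```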
The goal is clear: to use artificial intelligence to improve efficiency and personalization without losing control over the reality of the messages reaching customers. SendApp helps companies build more robust communication ecosystems, where labels, traceability, and human oversight work together. For those who want to rethink their WhatsApp Business strategy in light of these changes, the next step is to request a dedicated consultation and start a free trial of the SendApp platform, so as to field-test automation, AI, and the new rules of digital communication.