How Contrails.ai is Fighting the Unseen Online Battle Against Deepfakes & Synthetic Manipulated Media

Imagine an explosive video clip landing in a journalist’s inbox. It could be a politician confessing to a crime they never committed, a beloved actor endorsing a cryptocurrency scam, or a trusted news anchor delivering a fabricated report. The voice is perfect, the lip movements are synchronized, and the facial expressions are utterly convincing. The publishing deadline is looming. Run the story, and the journalist could break the biggest scoop of the year. Get it wrong, and their credibility and their publication’s trust are shattered forever.

This is the problem at the heart of the 21st century’s most insidious information war. It’s a war fought not with bullets, but with pixels on the screens in our pockets. The weapons are deepfakes, synthetic media meticulously crafted by artificial intelligence to deceive, disrupt, and destroy trust.

This is the high-stakes problem that Contrails.ai, a startup founded in 2023, has set out to solve. Co-founders Amitabh Kumar and Digvijay “Diggy” Singh are building what they describe as the essential safety infrastructure for the age of AI. They are the “brakes, the airbags, and the seatbelts for our digital lives.” Their mission is to detect deepfakes, prevent sophisticated scams, and arm organizations with tools to fight back against the rising tide of synthetic media.

In the heat of the 2024 Indian elections, their technology was quietly powering the editorial and fact-checking desks of major media houses like The Quint and Jagran, and organizations like the Deepfake Analysis Unit, providing a critical layer of defence against a tsunami of misinformation.

Today, their reach extends far beyond India, with their tools being used by fact-checkers across 180 different geographies and explored by content moderators in large enterprises. It’s a testament to the global scale of the problem they are tackling.

"Trust and safety is the cousin sister of cyber security," Amitabh explains, drawing a crucial distinction. "While cyber security is software versus software, trust and safety is where software goes against the human, or a human goes against another human using software. We take care of all the safety needs of users."

It is within this complex, human-centric battlefield that Contrails.ai is building its arsenal.

From a Thousand Questions to Building the First Product

Amitabh Kumar is a 15-year veteran of the trust and safety space with deep ties to the policy teams at Facebook, Twitter, and Netflix; Digvijay “Diggy” Singh is a brilliant AI and computer vision researcher from IIIT Hyderabad. Amitabh’s deep understanding of policy and the human impact of technology, combined with Diggy’s formidable technical prowess, created Contrails.

Their journey began over a LinkedIn message, a conversation that quickly evolved from a weekly advisory call to a full-blown partnership dedicated to solving one of the most pressing challenges of our time. As the 2024 elections loomed, the threat of deepfakes causing widespread chaos was no longer theoretical.

The initial spark came from an immediate, tangible need. Through a serendipitous encounter with leading fact-checker Bharat Nayak, Amitabh gained a front-row seat to the frustrations of journalists on the front lines of fake information.

"Bharat is a leading journalist and fact checker. He explained that Deepfakes are an emerging problem, and fact-checkers don't have tools to solve it. And they don't even have that much money."

This insight became the foundation of their product philosophy. The team, comprising Amitabh, Diggy, and Nayak, embarked on an intense, three-week deep dive, conducting research with 82 fact-checkers across India. It was a rigorous exercise in understanding the world of journalists who deal with fake information day in and day out. Diggy, the technologist, posed over a thousand questions, deconstructing the problem from every conceivable angle.

“The big part was, how will this person access a platform that can solve their problems?” Amitabh recounts Diggy asking them. “Is this going to go from phone, or is it going to happen on the laptop? How do they get the deepfakes? Is it coming via WhatsApp? Is it coming via a link? Is it coming via Dropbox? Because all of this changes the format of the file, the quality of the file, and the size of the file.”

This meticulous user research unearthed several critical pain points that existing solutions, mostly built in the West, had overlooked:

1. The Data Gap: Most detection tools were trained on American and European data, in English. They produced a high number of false positives when analyzing content featuring Indian faces and languages.

2. The Usability Gap: Available tools were often complex, requiring a moderate level of coding knowledge to access open-source models on platforms like GitHub. The target user, a journalist for example, could operate a web browser, but anything more technical was a barrier to adoption.

3. The Speed Gap: Fact-checking is a race against time. Journalists operate under constant deadlines, needing answers in minutes, not hours.

4. The Explainability Gap: This was perhaps the most crucial insight. Existing tools would return a cryptic score or a simple “real” or “fake” verdict. This was useless for a journalist who needed to build a case and explain their findings to their readers.

Amitabh says, “Just a bunch of points or scoring means nothing. The fact-checkers sought explainable evidence, preferably presented in a graphical format and clearly labelled as green, red, or orange, for use in journalism. This was their ask.”

With this deep understanding, Diggy coded the first version of their product within a couple of weeks. It was a simple, intuitive drag-and-drop web interface. A journalist could log in, upload a file or paste a link, and the GPU servers running on the platform’s backend would perform the cyber-forensic analysis.
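Contrails has not published a public API, but the submit-and-analyze workflow can be sketched in a few lines. Everything below, the endpoint URL, field names, and response keys, is a hypothetical stand-in for whatever the real backend exposes:

```python
import requests

# Hypothetical endpoint and payload shape; the real Contrails backend is not
# public, so this only illustrates the upload-then-analyze workflow.
API = "https://api.example-detector.com/v1/analyze"

with open("suspect_clip.mp4", "rb") as f:
    job = requests.post(API, files={"media": f}).json()

# The heavy cyber-forensic work runs on backend GPU servers; the client
# simply polls for the finished report.
report = requests.get(f"{API}/jobs/{job['job_id']}").json()
print(report["verdict"], report["explanation"])
```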

By the end of March 2024, a beta version was in the hands of The Quint, giving them a powerful new weapon in the fight against misinformation just as the election cycle heated up.

Understanding Contrails’ Detection Engine

For product leaders building in the media space, the question of how to combat increasingly sophisticated generative models is paramount. How can a detection model keep up when creation models are improving exponentially? Amitabh offers a fascinating and clarifying perspective on the technological arms race.

He says, "There are two aspects of AI. One is generative. One is discriminatory. While generative video and audio models are becoming better, it’s crucial to understand that they are only becoming better for the human eye and the human ear. But our models are discriminatory and not generative. Meaning, they are computer vision and computer audio models that aren't looking for what a human sees."

He uses a brilliant analogy of “pixel carbon dating.” A generative model builds an image pixel by pixel on a “sandy canvas,” but in doing so, it leaves behind a tell-tale digital signature. Some pixels are created before others, and to a computer vision model, this lights up like a heat map, revealing the artificial origin.

"We are essentially doing 'pixel carbon dating. Discrimination is easier than generation in theory. It's very easy to criticize something, but to create it is much, much difficult. Our job is of criticism. We just have to find the faults," says Amitabh."

Contrails’ detection engine is built on another fundamental truth: human uniqueness.

"We all have our own video DNA, our audio DNA, our movement DNA, and no two are alike. I can wave my hands 100 times, but not once, exactly the same. When the computer generates it, this movement will be exactly the same each time."

Contrails’ engine is trained to spot these subtle, inhuman consistencies that the human eye glosses over. When a user submits a video, the Contrails dashboard doesn’t just return a score. It provides a detailed report:

  • Multi-Modal Analysis: It breaks down the video and audio components separately, running them through a suite of specialized models.

  • Frame-by-Frame Forensics: It visualizes the analysis, showing a timeline of the video with color-coded segments indicating the probability of manipulation at any given second.

  • Face-Specific Detection: If there are multiple people in a video, the tool can identify which specific face shows signs of being deepfaked, often pinpointing unnatural lip-syncing.

  • Plain-English Explanations: The report includes written commentary that translates the technical findings into understandable insights.

    For example: “The video shows clear signs of AI manipulation. The unnatural lip movement on both sides indicates lip sync and AI manipulation,” or “The voice sounds very clear, however, the tone appears slightly monotonous and unnatural, indicating advanced AI voice cloning tools were used.”

This level of explainability is the core of their value proposition. It empowers a non-technical user with the confidence and evidence they need to make a high-stakes judgment call.
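Contrails hasn’t published the schema behind its dashboard, but the report described above maps naturally onto a small data structure. As a rough sketch, with every field name a guess rather than the real API:

```python
from dataclasses import dataclass, field

@dataclass
class SegmentVerdict:
    start_s: float        # segment start time, in seconds
    end_s: float          # segment end time, in seconds
    probability: float    # estimated probability of manipulation, 0..1
    colour: str           # "green", "orange", or "red" on the timeline

@dataclass
class FaceFinding:
    face_id: int          # which face in the frame
    manipulated: bool
    note: str             # e.g. "unnatural lip-sync on this speaker"

@dataclass
class AnalysisReport:
    video_segments: list[SegmentVerdict] = field(default_factory=list)
    audio_segments: list[SegmentVerdict] = field(default_factory=list)
    faces: list[FaceFinding] = field(default_factory=list)
    summary: str = ""     # plain-English commentary a journalist can quote
```

Keeping video and audio verdicts in separate lists mirrors the multi-modal analysis above, and the summary field is where the plain-English explanation would live.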

How User Feedback Forged Contrails

The Contrails platform available to users today is the sixth version in just over a year, a testament to the company’s relentless iteration cycle, driven entirely by user feedback. The early product was much simpler, offering a single probability score. But users immediately started asking questions that shaped the product roadmap.

For example, in a video where two people are speaking, users wanted to know which person was real and which was a deepfake. This feedback led directly to the development of face-specific analysis, which separates and examines every frame.
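A minimal sketch of that per-face separation, assuming a classic OpenCV face detector (Contrails’ production pipeline is not public, and deepfake_model below is a hypothetical per-face classifier):

```python
import cv2

# Toy version of per-face isolation: detect every face in every frame so
# each one can be scored separately.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("debate_clip.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
        face_crop = frame[y:y + h, x:x + w]
        # score = deepfake_model(face_crop)  # hypothetical classifier;
        # (frame_idx, face, score) tuples would feed the colour-coded timeline.
    frame_idx += 1
cap.release()
```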

Another user suggested that instead of showing a numerical frame number, they should use visual cues. “Instead of giving the number of the frame, give us the color of the frame. So that is visually appealing.” This seemingly small UX tweak was a huge leap in usability for their non-technical audience.

"Every feedback cycle is an upgrade where something very obvious has been missed," says Amitabh.

The process is a continuous dialogue, refining everything from the core detection models to the user interface. They are even contemplating a feature that would auto-generate the technical section of a fact-checking report, allowing a journalist to literally copy and paste it into their final article.

This rapid, user-centric development highlights a key advantage startups have over large corporations like Google, Microsoft, or Meta building tools internally to tackle the same issue.

"A big company will take six months, even in the first step of deciding what to build. The fact that our bootstrapped startup in the last 15-16 months has created six updates of its solution is an awesome place to be in. That’s the magic of why startups are better."

Go-to-Market: From Community to Enterprise

Contrails’ go-to-market strategy has been a story in two phases. Phase one was a community-led approach, driven by Amitabh’s networks, with help from fact-checkers like Bharat Nayak. They offered their tool for free to fact-checkers during the elections. This was a shrewd strategic move.

Amitabh explains, “While they don't pay big money, they are people who provide us data from multiple countries. In deepfake detection, what gives you an edge is diverse data streams. We are already serving 180 geographies, which would not have been possible on a bootstrapped level."

This initial phase allowed them to battle-test their product, refine their models on a uniquely diverse dataset, and build a reputation within a key user community. But the long-term vision requires landing enterprise clients: the big tech platforms, BPOs, and content aggregators whose teams moderate content at massive scale. This makes phase two of their GTM a much more challenging endeavour.

Amitabh has been spending significant time in the United States, navigating the world of enterprise sales. He’s learned that the game has changed.

"Post-COVID, the GTM has completely changed to this relationship selling. People first want to get to know you. Then they want to get to know your product, and then they want to know whether you can service the product after the sale."

It’s a long, trust-building process, often requiring eight or nine conversations before an NDA is even discussed. The biggest sales challenge, however, is one of urgency. While deepfake scams and misinformation cause daily havoc, the regulations forcing companies to act are still in their infancy. Laws like the EU’s Digital Services Act (DSA) and the UK’s Online Safety Act are on the books, but they provide grace periods for implementation.

"Think about regulations like taxes. You don't pay the tax till the last date. Trust and safety sales cycles are long, up to three to six months. But specifically in deepfake, because the laws are not implemented, the urgency is not there right now."

Contrails faces the classic innovator’s dilemma: selling a solution to a problem that many potential customers know is critical, but haven’t been forced to prioritize yet. The company is playing the long game, building relationships and raising awareness, confident that as regulations tighten and the cost of inaction becomes undeniable, the market will come to them.

Beyond Deepfakes: Building a Safer Web

While deepfake detection is their primary market, Contrails’ ambitions are broader. They see a future built on “agentic workflows” with specialized AI agents designed to tackle specific, high-risk trust and safety problems.

The company is already developing models for detecting caste-based hate speech, a nuanced and deeply damaging problem specific to the Indian context that has been largely ignored by Western platforms. They are also working on a solution to detect Child Sexual Abuse Material (CSAM), with the goal of creating a system where no human moderator ever has to view such traumatic content again.

Another project is an automated content reviewer, an AI agent capable of watching and flagging disturbing content like beheadings and violence, protecting human moderators from psychological harm.

This vision positions Contrails not just as a deepfake detection company, but as a foundational trust and safety AI lab. Their North Star, as Amitabh describes it, is less a single metric and more a guiding principle of impact.

"In the 2024 election, we created a product which did 40 of the most prominent fact checks of this country. It was in the national news every day. Now that is a high you don't get with money or anything. India is forgotten in the trust and safety industry. Nobody cares about India. If we can create solutions that will make Indians and other people across the globe safer on the internet, then that would be our North Star."
