Documentation
ZONN.ai is an AI content detection platform that helps you determine whether text, images, or social media posts were generated by AI or created by a human. It combines multiple detection models into a single ensemble analysis, gives you a confidence score for every scan, and wraps it in a community where people can discuss and vote on borderline content. This guide covers everything you need to get the most out of the platform.
Quick Start
The fastest way to get a detection result is through /tools — the standalone analyzer. No account required for a basic scan.
Step 1 — Open the analyzer
Navigate to zonn.ai/tools. The scan hero at the top of the page is your main entry point for all detection types.
Step 2 — Submit content
You have three input options:
- Paste text — paste any written content directly into the text field. Works for articles, essays, emails, social captions, and code comments.
- Upload an image — drag and drop or select a JPEG, PNG, or WebP file for image analysis.
- Enter a URL — paste a link from Twitter/X, Reddit, or Instagram. ZONN.ai will extract the content automatically and run the appropriate detector.
Step 3 — Read the analysis
After a few seconds, you will see a results page at zonn.ai/zonned/[id]. The page shows the ensemble ZONN Score, a confidence label, and individual signals from each model that participated in the analysis. Every result has a shareable permalink.
Tip: Results are permanent and shareable. You can copy the zonned/[id] URL and send it to anyone — no account needed to view a result.
Text Analysis
Text detection identifies linguistic patterns, statistical regularities, and stylistic fingerprints that correlate with AI-generated writing. Large language models like GPT-4, Claude, and Gemini tend to produce text with characteristic distributions of word choice, sentence structure, and perplexity that differ measurably from human writing — especially at scale.
How detection works
When you submit text, ZONN.ai routes it through four specialized ONNX models running in parallel on the detection service. Each model was trained on different datasets and uses a different architecture — some focus on token-level perplexity, others on stylometric features, and others on semantic coherence patterns. The results are aggregated into a single ensemble score using a weighted combination of individual model confidence values.
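The weighted aggregation can be sketched as follows. The model names and weights here are illustrative assumptions — ZONN.ai does not publish its actual model list or weighting:

```python
def ensemble_score(signals, weights):
    """Weighted average of per-model scores on a 0-100 scale."""
    total_weight = sum(weights[m] for m in signals)
    return sum(signals[m] * weights[m] for m in signals) / total_weight

# Model names and weights are illustrative, not ZONN.ai's actual configuration.
signals = {"perplexity": 91.0, "stylometry": 78.0, "coherence": 85.0, "token_stats": 70.0}
weights = {"perplexity": 1.5, "stylometry": 1.0, "coherence": 1.2, "token_stats": 0.8}
score = ensemble_score(signals, weights)  # ≈ 82.8
```

Giving historically more accurate models a larger weight means a single noisy detector cannot dominate the ensemble result.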
Minimum length
For reliable results, aim for at least 50 words (~300 characters). Short snippets — a sentence or two — do not give the models enough signal to distinguish AI from human writing with meaningful confidence. Very short inputs will return a low confidence score regardless of origin.
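A rough client-side pre-check mirroring this guidance might look like the sketch below; the thresholds come from the paragraph above, and treating both as hard gates is an assumption rather than ZONN.ai's documented behavior:

```python
def meets_minimum_length(text, min_words=50, min_chars=300):
    """Pre-check against the documented 50-word / ~300-character guidance."""
    return len(text.split()) >= min_words and len(text.strip()) >= min_chars
```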
Best inputs for text detection
- Blog posts, articles, and long-form essays
- Academic writing and research summaries
- Product descriptions, marketing copy
- Social media captions extracted from posts
- Email newsletters and customer support responses
- Code comments and documentation prose (not raw code)
Note: Heavily edited AI text — where a human has substantially rewritten the output — will score lower on the AI scale. Detection works best on unmodified or lightly edited content.
Image Analysis
Image detection identifies artifacts, frequency anomalies, and learned feature patterns that differentiate AI-generated images from photographs or traditionally edited images. Diffusion models (Stable Diffusion, Midjourney, DALL-E) and GANs leave behind characteristic traces at the pixel level that trained models can detect.
Supported formats
- JPEG / JPG — most common format, works well unless heavily compressed
- PNG — lossless, ideal for preserving frequency-domain artifacts
- WebP — modern format with good fidelity, fully supported
Detection approaches
ZONN.ai runs up to seven image models across two detection services:
- Frequency analysis — examines the Fourier spectrum of the image for grid-like artifacts introduced by upsampling and decoding steps in diffusion models (Fourier detector).
- Learned CNN features — convolutional networks trained on large datasets of real vs. AI images, detecting texture and semantic anomalies invisible to the human eye (GramNet, NPR).
- Reconstruction error — RIGID measures how well an autoencoder can reconstruct the image; AI images tend to reconstruct differently from photographs.
- CLIP / SigLIP embeddings — vision-language models that understand the semantic content and can flag visual–textual inconsistencies common in AI generations.
- Community forensics features — CommFor extracts handcrafted forensic features from image metadata and color histograms.
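To build intuition for the frequency-analysis approach, here is a minimal standalone sketch — not ZONN.ai's actual detector — showing how a pixel-level grid pattern (a stand-in for upsampling artifacts) concentrates spectral energy at high frequencies, while a smooth gradient concentrates it at low frequencies:

```python
import cmath

def dft2_energy(img):
    """Naive 2D DFT power spectrum of a square grayscale image (mean removed)."""
    n = len(img)
    mean = sum(sum(row) for row in img) / (n * n)
    spec = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            acc = 0j
            for x in range(n):
                for y in range(n):
                    acc += (img[x][y] - mean) * cmath.exp(-2j * cmath.pi * (u * x + v * y) / n)
            spec[u][v] = abs(acc) ** 2
    return spec

def high_freq_ratio(img, cutoff=2):
    """Fraction of spectral energy at high spatial frequencies."""
    spec = dft2_energy(img)
    n = len(spec)
    total = sum(e for row in spec for e in row)
    low = sum(spec[u][v] for u in range(n) for v in range(n)
              if min(u, n - u) <= cutoff and min(v, n - v) <= cutoff)
    return 1.0 - low / total if total else 0.0

# Alternating-pixel grid vs. smooth horizontal gradient, both 8x8 grayscale.
checker = [[255 if (x + y) % 2 else 0 for y in range(8)] for x in range(8)]
gradient = [[x * 30 for _ in range(8)] for x in range(8)]
checker_ratio = high_freq_ratio(checker)    # energy sits at the Nyquist frequency
gradient_ratio = high_freq_ratio(gradient)  # energy sits at low frequencies
```

Real detectors operate on full-resolution images with learned decision boundaries rather than a fixed cutoff, but the underlying signal is the same: generation pipelines leave periodic structure in the spectrum that natural photographs lack.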
Limitations
Image detection accuracy degrades in specific conditions you should be aware of:
- Heavy JPEG compression — repeated save/re-encode cycles destroy the frequency artifacts that detectors rely on.
- Screenshots — screenshotting an AI image adds screen-space rendering layers that confuse frequency-based detectors.
- Heavy filters or edits — applying Instagram-style filters, face-retouching, or heavy post-processing can mask AI artifacts.
- Very small images — images smaller than approximately 256×256 px may yield low-confidence results due to insufficient spatial information.
Tip: For best results, use the original image file rather than a downloaded or screenshot version. Original files preserve the pixel-level artifacts that detectors look for.
Understanding Scores
Every analysis produces a ZONN Score and a set of individual detector signals. Together they give you a complete picture of what the models found — and how confident they are.
The ZONN Score
The ZONN Score is a single number from 0 to 100, where higher values indicate stronger AI likelihood. It is computed as a weighted ensemble average of all participating model scores. Models with higher historical accuracy receive more weight in the final calculation.
- 85–100 — Very likely AI-generated. Most signals point to AI generation. Detection tools are not perfect — treat this as a strong indication, not a verdict.
- 61–84 — Probably AI-generated. More signals lean toward AI generation than real, but some give weaker readings. Treat with caution.
- 40–60 — Inconclusive. Signals are mixed or weak. We can't tell with confidence — context, source, and your own judgement matter here.
- 16–39 — Probably real. More signals lean toward real than AI, but some give weaker readings. Worth a second look on close inspection.
- 0–15 — Very likely real. Most signals point to a real, human-captured source. Detection tools are not perfect — treat this as a strong indication, not a verdict.
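The bands above map scores to labels like this — a sketch of the documented thresholds; the exact UI strings may differ:

```python
def zonn_label(score):
    """Map a 0-100 ZONN Score to its documented confidence band."""
    if score >= 85:
        return "Very likely AI-generated"
    if score >= 61:
        return "Probably AI-generated"
    if score >= 40:
        return "Inconclusive"
    if score >= 16:
        return "Probably real"
    return "Very likely real"
```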
AI detection is probabilistic, not definitive. Use this score as one input among many.
Individual detector signals
Below the ensemble score, the results page shows each model's individual reading. A signal includes the model name, its raw score, and whether it voted AI or Human. Seeing all signals helps you understand when models disagree — for example, if three models say AI at 90% and one says Human at 30%, the ensemble will lean heavily AI but the disagreement itself is informative.
Why no detector is 100% accurate
AI detection is inherently probabilistic. Detection models are trained on existing AI systems and may not generalize perfectly to newer models, fine-tuned variants, or hybrid content. Human writing that happens to be unusually formulaic — a legal template, for instance — can score higher than expected on the AI scale. Conversely, high-quality AI writing that closely mimics human style may score lower. Use ZONN scores as a strong signal, not a definitive verdict. For high-stakes decisions, combine the score with contextual evidence.
ZONN.ai scores are probabilistic estimates. They are not definitive evidence and should not be used as the sole basis for consequential decisions such as academic integrity proceedings.
Community Features
Beyond the standalone analyzer, ZONN.ai has a Reddit-style community where members post content — text, images, or social media links — and the community collectively evaluates whether it is AI or human-made. This surfaces interesting edge cases, creates a record of notable AI detections, and lets you get a second opinion from real humans.
Posting content
To submit content to the community, navigate to /submit. You will need to be signed in. Provide a title, choose a content type (text, image, or link), and paste your content. Once submitted, the post appears on the community feed and is automatically analyzed by the detection engine.
AI / Real voting
Instead of upvotes and downvotes, ZONN.ai uses a binary AI / Real vote on each post. Every signed-in user can cast one vote per post. The community vote tally is shown separately from the machine detection score, making it easy to spot cases where humans and the models disagree.
Commenting and discussion
Each post has a threaded comment section. Use comments to share evidence, explain your reasoning, or link to related sources. Comments support nested replies, and community members can vote on comments to surface the most insightful observations.
Good Post recognition
Community members can mark a post as a "Good Post" to recognize submissions that are particularly interesting, well-sourced, or illustrate a notable detection case. Good Post marks are tracked separately from AI/Real votes and contribute to a poster's reputation.
Leaderboard
The leaderboard ranks community members by their detection accuracy and engagement. Build your score by posting quality content, voting correctly, and earning Good Post marks from other members. Top detectives are featured on the community leaderboard.
Community feed tabs
- / — Hot feed. Recent posts ranked by combined AI/Real vote activity and community engagement.
- /new — Latest submissions in reverse chronological order.
- /top — All-time top posts by Good Post recognition and community vote count.
Browser Extension
The ZONN.ai browser extension brings AI detection directly into your browsing experience. Instead of copying content and switching to the web app, you can trigger a scan from anywhere on the web without leaving the page you are on.
Supported browsers
- Google Chrome — fully supported. Available for installation from the Chrome Web Store.
How to use it
The extension provides two main interaction patterns:
- Highlight and right-click — select any block of text on a webpage, right-click to open the context menu, and choose Check with ZONN.ai. The extension sends the selected text to the detection engine and displays the result in a popup.
- Extension icon — click the ZONN.ai icon in the browser toolbar to open the full scanner popup, where you can paste text or a URL and run a scan without leaving your current tab.
The extension sends content to the ZONN.ai backend for analysis. Review content before submitting if it contains sensitive or confidential information.
Game Mode
Game Mode is an interactive challenge that tests your ability to distinguish AI-generated content from human-made content — before the model gives you the answer. It is a fun way to calibrate your own instincts while contributing to the community's understanding of what makes AI content detectable.
How it works
Each round presents you with a piece of content, such as a photograph, artwork, illustration, or text passage. Your job is to classify it as either AI Made or Real before the timer runs out. After you vote, the correct answer is revealed along with detection model details. Work through as many rounds as you can in a session.
Scoring
Your score accumulates across correct guesses. Consecutive correct answers build a streak multiplier — the longer your streak, the more points each correct answer is worth. A perfect session earns you maximum cred on the platform. Results contribute to your community profile and the global leaderboard.
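The streak mechanic can be illustrated with a sketch; the base points, multiplier step, and cap below are illustrative assumptions, not ZONN.ai's actual values:

```python
def session_score(results, base=100, step=0.5, cap=3.0):
    """Accumulate points over a session with a streak multiplier.

    results: list of booleans, one per round (True = correct guess).
    base/step/cap are hypothetical; the real point values are not documented.
    """
    total, streak = 0, 0
    for correct in results:
        if correct:
            streak += 1
            multiplier = min(1 + step * (streak - 1), cap)
            total += int(base * multiplier)
        else:
            streak = 0  # a miss resets the multiplier
    return total
```

With these assumed values, three correct answers in a row are worth 100 + 150 + 200 points, while a miss in the middle resets the multiplier to the base rate.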
Round types
- Image rounds — photographs, digital art, and illustrations sourced from curated datasets of known AI and real images.
- Text rounds — passages of text where you must guess whether a human or an LLM wrote it.
Tip: Pay attention to subtle details — fingers with unusual joints, backgrounds that blend oddly, or text in images that is almost legible. These are common tells for AI-generated images.
API Access (Coming Soon)
A public REST API for programmatic access to the ZONN.ai detection engine is on the roadmap. The API will allow you to submit text and images, receive structured JSON responses with ensemble scores and individual model signals, and integrate AI detection directly into your own applications, pipelines, or browser extensions.
Planned capabilities include:
- POST /v1/analyze/text — submit raw text and receive an ensemble AI likelihood score
- POST /v1/analyze/image — submit an image file or URL for multi-model image detection
- GET /v1/results/[id] — retrieve a previously cached analysis result
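Since the API is not yet available, the request payload and response below are hypothetical illustrations of what the planned endpoints might accept and return, not a published schema:

```python
import json

# Hypothetical shapes for the planned POST /v1/analyze/text endpoint;
# the real request and response schema has not been published.
def build_text_request(text):
    return json.dumps({"type": "text", "content": text})

sample_response = json.loads("""
{
  "id": "abc123",
  "zonn_score": 87.4,
  "label": "Very likely AI-generated",
  "signals": [
    {"model": "perplexity", "score": 91.0, "vote": "AI"},
    {"model": "stylometry", "score": 83.8, "vote": "AI"}
  ]
}
""")
ai_votes = sum(1 for s in sample_response["signals"] if s["vote"] == "AI")
```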
API access is not yet available. If you are interested in early access or have a use case you would like to discuss, reach out through the community or via the contact details in the footer.