Demystifying Source Reliability: How to Ensure Credible CTI

In the world of cyber threat intelligence (CTI), analysts are constantly swimming in a sea of data. From dark web chatter and OSINT reports to premium threat feeds, the volume is overwhelming. But how much of it can you trust? How do you evaluate source reliability?

Evaluating source reliability is not just a best practice—it’s a fundamental necessity for any effective CTI program. Acting on flawed intelligence is just as dangerous as having no intelligence at all, leading to misallocated resources, a false sense of security, and ultimately, a failed defense.

This is where the art and science of assessing source reliability come into play. This guide will walk you through the core principles of intelligence evaluation, introduce you to the time-tested Admiralty Code for grading source reliability, and provide actionable best practices for building a more resilient and trustworthy CTI function.



What is Source Reliability (and Why Does it Matter)?

At its core, intelligence analysis involves two critical and distinct assessments that determine overall quality and trust:

  1. Source Reliability: How trustworthy is the origin of the information? This evaluates the source itself—its history, biases, and overall competence. A strong reliability rating reflects a consistent track record of accurate reporting.
  2. Information Credibility: How believable is the information itself? This evaluates the specific report or data point on its own merits—its logic, its internal consistency, and its corroboration.

A common rookie mistake—and a dangerous one—is to conflate the two. An analyst might see a report from a top-tier cyber security firm (a source with high reliability) and automatically assume the information within is infallible. 

Conversely, they might dismiss a critical piece of data from a new or anonymous source simply because the source’s reliability has not been vetted. 

Both approaches are flawed. To avoid this analytical trap and the bias it introduces, you must evaluate them separately. This disciplined separation provides a structured framework to cut through the noise and improve your threat prioritization.

So, how can you measure source reliability and information credibility? 


How to Measure Source Reliability and Information Credibility

Before using an intelligence source, you need a consistent method to measure its reliability and credibility. This isn’t a gut feeling; it’s a methodical process based on a defined set of criteria. 


Improving your team’s approach to measuring source reliability and information credibility starts with these key factors:

  • Historical Accuracy and Track Record: This is the most important factor. Does the source have a proven history of providing timely, accurate, and valid information? Sources build trust and improve their reliability rating over time.
  • Reputation and Expertise: Is the source well-established and recognized in the security community? Do the authors demonstrate genuine expertise through original research, publications, or other contributions?
  • Data Provenance and Methodology: Can the source explain where its data comes from and the methods used to analyze it? Transparency here is a strong indicator of a mature intelligence provider, and it makes the credibility of individual reports far easier to judge.
  • Objectivity and Bias: Is the source’s reporting objective, or is it trying to sell a product or push an agenda? Do they use estimative language to quantify their assessments? Be aware of potential biases and distinguish between inherent bias and deliberate misdirection. 
  • Proximity to the Source: How close was the source to the information it is reporting? Primary sources that have direct knowledge are generally more reliable than secondary sources that are passing along information.

These are essential criteria to consider when evaluating intelligence sources. By documenting and consistently applying these criteria, you can move from subjective assessments to a standardized, defensible process for rating every source in your intelligence pipeline. 
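
To make that documentation concrete, here is a minimal Python sketch of a per-source assessment record. The class and field names are illustrative assumptions, not part of any standard schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SourceAssessment:
    """Illustrative record for documenting how an intelligence source was evaluated."""
    name: str
    assessed_on: date
    historical_accuracy: str      # e.g. "8 of 10 past reports later corroborated"
    reputation: str               # recognition and expertise in the security community
    provenance: str               # how the source collects and analyses its data
    bias_notes: str               # commercial agenda, use of estimative language, etc.
    proximity: str                # primary (first-hand) vs. secondary reporting
    notes: list[str] = field(default_factory=list)

# Hypothetical entry kept alongside the source in an intelligence pipeline
feed = SourceAssessment(
    name="Example commercial threat feed",
    assessed_on=date(2024, 1, 15),
    historical_accuracy="Consistently corroborated over the past 12 months",
    reputation="Well established; publishes original research",
    provenance="Own sensor network; methodology documented",
    bias_notes="Vendor content, but uses estimative language",
    proximity="Primary for its own telemetry",
)
```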

If only there were a standardized framework you could use to evaluate these key factors… luckily, there is!


The Admiralty Code: A Universal Language for Source Reliability

To standardize the evaluation of intelligence, many organizations use the NATO Admiralty Code.

This robust framework provides a standard, unambiguous language for discussing source reliability and information credibility when performing intelligence analysis. Its widespread adoption in CTI means that when one analyst shares a report rated “B2” with another organization, there is no confusion about the perceived quality of that data. 

Without using a standardized approach, critical terms like “trusted source” or “likely threat” become subjective and open to dangerous misinterpretation. 

The Admiralty Code is often used alongside the Traffic Light Protocol (TLP) for sharing sensitive information. TLP is an information grading system that ensures sensitive information is shared with the right audience via simple “traffic light” designations.
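
For reference, the current TLP labels (version 2.0, published by FIRST) map to audiences roughly as follows. The helper below is a hypothetical illustration, not official TLP tooling:

```python
# TLP 2.0 labels and the audience each one permits.
TLP_AUDIENCE = {
    "TLP:RED": "Named recipients only",
    "TLP:AMBER+STRICT": "Recipient's organisation only",
    "TLP:AMBER": "Recipient's organisation and its clients, on a need-to-know basis",
    "TLP:GREEN": "The wider community, but not public channels",
    "TLP:CLEAR": "No restriction; may be shared publicly",
}

def may_share_publicly(label: str) -> bool:
    """Illustrative check: only TLP:CLEAR information may be published openly."""
    return label == "TLP:CLEAR"

print(may_share_publicly("TLP:AMBER"))  # False
```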

The Admiralty Code uses a two-character rating that combines the assessment of the source’s reliability with the information’s credibility. Let’s take a look at what these codes mean.
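
Before looking at the individual grades, here is a minimal Python sketch of how the two halves of a rating can be represented and combined. The class names are illustrative; only the A–F letters and 1–6 numbers come from the Admiralty Code itself:

```python
from dataclasses import dataclass
from enum import Enum

class SourceReliability(Enum):
    """First character of an Admiralty rating: the source itself."""
    A = "Completely reliable"
    B = "Usually reliable"
    C = "Fairly reliable"
    D = "Not usually reliable"
    E = "Unreliable"
    F = "Reliability cannot be judged"

class InfoCredibility(Enum):
    """Second character of an Admiralty rating: the information itself."""
    CONFIRMED = 1
    PROBABLY_TRUE = 2
    POSSIBLY_TRUE = 3
    DOUBTFUL = 4
    IMPROBABLE = 5
    CANNOT_BE_JUDGED = 6

@dataclass(frozen=True)
class AdmiraltyRating:
    source: SourceReliability
    info: InfoCredibility

    def __str__(self) -> str:
        # e.g. (B, PROBABLY_TRUE) -> "B2"
        return f"{self.source.name}{self.info.value}"

print(AdmiraltyRating(SourceReliability.B, InfoCredibility.PROBABLY_TRUE))  # B2
```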


Source Reliability (A-F)

The first code assigned relates to the source’s reliability. This rating assesses the trustworthiness of the source itself, based on its track record and competence.

Possible source reliability ratings:

  • A – Completely Reliable: No doubt about the source’s authenticity. It has a history of complete reliability.
    • CTI Example: Verified technical indicators from a trusted internal sandbox, corroborated by multiple, independent top-tier threat feeds.
  • B – Usually Reliable: Minor doubt, but the source has a history of providing valid information most of the time.
    • CTI Example: A reputable commercial threat intelligence provider or a well-regarded private sharing community (ISAC).
  • C – Fairly Reliable: Doubts exist, but the source has provided valid information in the past.
    • CTI Example: A known security researcher on social media who is often correct but doesn’t have a formal vetting process.
  • D – Not Usually Reliable: Significant doubt, though the source has occasionally provided valid information.
    • CTI Example: An anonymous user on a dark web forum with a mixed history of credible and false claims.
  • E – Unreliable: The source lacks authenticity and has a history of providing inaccurate information.
    • CTI Example: A known disinformation outlet or a source caught in a deliberate false flag operation.
  • F – Reliability Cannot Be Judged: There is not enough information to evaluate the source.
    • CTI Example: A brand-new blog or a newly appeared handle on a hacking forum.

Information Credibility (1-6)

The next rating assigned relates to the credibility of the information the intelligence source is providing. This rating assesses the likelihood that the information itself is accurate, regardless of the source.

Possible information credibility ratings:

  • 1 – Confirmed by Other Sources: The information is logical and corroborated by other independent, reliable sources.
    • CTI Example: A specific TTP is reported by three different cyber security firms, each with its own primary research.
  • 2 – Probably True: The information is not confirmed, but is logical in itself and consistent with other information on the subject.
    • CTI Example: A report of a new malware variant that perfectly aligns with the known modus operandi of a specific threat actor.
  • 3 – Possibly True: The information is reasonably logical but lacks corroborating evidence.
    • CTI Example: A single, unverified claim of a data breach that seems plausible but has no supporting evidence yet.
  • 4 – Doubtful: The information is possible but not logical. There is no other information on the subject.
    • CTI Example: A claim of a new, unprecedented attack vector that seems technically far-fetched.
  • 5 – Improbable: The information is illogical and contradicted by other reliable intelligence.
    • CTI Example: A report claiming a threat actor used a specific tool, but forensic evidence from the incident directly refutes it.
  • 6 – Truth Cannot Be Judged: It is impossible to evaluate the validity of the information.
    • CTI Example: An anonymous paste site post claims a massive breach but provides zero verifiable details.

Putting It All Together: From A1 to F6

By combining the two ratings, you create a robust, at-a-glance assessment. These assessments factor in source reliability and information credibility, so analysts don’t need to guess how accurate their sources are.

Here are some examples of Admiralty Code ratings and the actions they typically warrant (a small triage sketch in code follows the list):

  • A1 (The Gold Standard): A completely reliable source provides information confirmed by others.
    • Action: This is high-confidence, actionable intelligence. Act immediately.
  • C3 (Proceed with Caution): A fairly reliable source provides information that is possibly true.
    • Action: Worth investigating, but seek corroboration before taking significant action.
  • E5 (Ignore and Discard): An unreliable source provides information that is improbable and contradicted by other intelligence.
    • Action: Likely disinformation. Discard unless there’s a compelling reason to analyze the source’s motives.
  • D4 (Monitor for Developments): A not usually reliable source provides doubtful information.
    • Action: Keep this on your radar. While unlikely, it’s worth monitoring to see if further evidence emerges.
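
The following sketch turns those example actions into a tiny triage helper. The cut-off rules are assumptions chosen to mirror the examples above; they are not thresholds defined by the Admiralty Code:

```python
# Illustrative triage helper: map an Admiralty rating string (e.g. "A1", "D3")
# to a rough handling decision.
def triage(rating: str) -> str:
    source, credibility = rating[0].upper(), int(rating[1])
    if source in "AB" and credibility <= 2:
        return "Act: high-confidence, actionable intelligence"
    if source == "E" or credibility == 5:
        return "Discard: likely disinformation"
    if credibility <= 3:
        return "Investigate: seek corroboration before acting"
    return "Monitor: keep on the radar for further evidence"

for example in ("A1", "C3", "E5", "D4"):
    print(example, "->", triage(example))
```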

Other Grading Systems

Another popular framework, particularly within law enforcement communities, is the Police 5x5x5 System. In this model, the first ‘5’ evaluates the source, the second ‘5’ evaluates the information, and the final ‘5’ specifies how the information should be handled (e.g., with whom it can be shared).

While simpler than the Admiralty Code, it shares the same foundational goal: to enforce a structured, evidence-based approach to evaluating intelligence. That said, translating the 5x5x5 system to cyber threat intelligence is considerably more challenging.

The 5x5x5 system was replaced by the simpler 3x5x2 model in 2016 for UK police forces. The newer model follows the same structure, just with fewer grading options.


Walkthrough: Grading Source Reliability in Practice

Let’s walk through a realistic scenario to see how this works.

The Scenario: You discover a post on a mid-tier hacking forum by a user named “GlitchFactor.” The post claims to have the source code for a new, undetected ransomware variant called “DataWipe” and includes several code snippets. 

Objective: Your goal is to determine the reliability and credibility of the information provided by “GlitchFactor.”

Grading Source Reliability and Information Credibility in 3 Steps

Step 1: Evaluate the Source (“GlitchFactor”)

  • Historical Accuracy: A search for “GlitchFactor” reveals they have a limited post history. Of the five previous posts, three were generic comments, one was a moderately accurate claim about a vulnerability that was later confirmed by others, and one was an unsubstantiated rumor that ultimately proved unfounded. This is a very mixed track record.
  • Reputation and Expertise: The user is not a well-known entity in the security community. Their status on the forum is average, with no special reputation or privileges.
  • Conclusion for Source: Based on the limited and mixed track record, you have significant doubts about this source’s competency and trustworthiness. You rate the source as D (Not Usually Reliable).

Step 2: Evaluate the Information (The “DataWipe” Ransomware)

  • Corroboration: You search all available OSINT and commercial intelligence feeds for any mention of “DataWipe” ransomware. The search comes up empty. The information is entirely unconfirmed by any other source.
  • Logic and Consistency: You examine the provided code snippets. The code appears plausible for a ransomware payload; it includes functions for file encryption and communication with a command-and-control (C2) server that align with known ransomware tactics, techniques, and procedures (TTPs). The information itself is logical, even if unverified.
  • Conclusion for Information: Since the information is logical but completely uncorroborated, you rate it as 3 (Possibly True). It’s not “Probably True” (2) because there’s no other intelligence to support it, but it’s not “Doubtful” (4) because the technical details seem sound.

Step 3: Combine and Act

  • Final Rating: The combined Admiralty Code rating for this piece of intelligence is D3.
  • Action: A D3 rating should not be ignored, but it certainly doesn’t warrant waking up the CISO. The intelligence comes from a source with low reliability but contains plausible, internally consistent details. The correct action is to create a low-priority task for further monitoring (see the sketch below for how this outcome might be recorded).
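
As a closing sketch, the outcome of this walkthrough might be recorded like so. The field names and record structure are invented for illustration:

```python
# Hypothetical record of the walkthrough outcome; field names are illustrative.
glitchfactor_report = {
    "source": "GlitchFactor (mid-tier hacking forum)",
    "claim": "Source code for new 'DataWipe' ransomware",
    "source_reliability": "D",   # limited, mixed track record
    "info_credibility": 3,       # plausible code, but zero corroboration
    "rating": "D3",
    "action": "Low-priority monitoring task; re-rate if corroboration emerges",
}
print(f"{glitchfactor_report['rating']}: {glitchfactor_report['action']}")
```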

Now that you know how the Admiralty Code works, let’s explore some of the challenges of applying this framework to real-world CTI work.


The Challenges of Grading Source Reliability

The Admiralty Code framework was designed for evaluating “bomb and bullet” military intelligence, usually gathered through covert human sources. As such, applying this traditional intelligence framework to the cyber domain is not without its challenges.

Here are some of the challenges you will likely encounter when trying to apply the Admiralty Code to CTI sources.

Source Reliability and Information Credibility Grading Challenges

Pace and Volume

The cyber threat landscape moves at a blistering pace. New threat actor handles, malware variants, and attack campaigns emerge daily. This makes it difficult to establish historical accuracy for new sources. 

Many of the sources you encounter will have a low source reliability rating because they lack a proven track record of accuracy. Don’t expect to see many cutting-edge sources with a grade higher than a C.

Circular Reporting

This is a significant problem that creates an echo chamber of false confirmation. It happens when multiple outlets report on the same piece of intelligence, but all trace back to a single, unvetted original source.

This is common in CTI as vendors and news organizations are keen to jump on the latest threats, even if the credibility or reliability of the information is unproven.
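
One simple defence is to trace each report back to its origin before counting it as corroboration. The sketch below assumes each report records the source it cites, which is an idealized data model used purely for illustration:

```python
# Minimal sketch: detect when several "independent" reports all trace back to
# one original source.
def original_source(report: str, cites: dict[str, str]) -> str:
    """Follow citation links until reaching a report that cites nothing further."""
    seen = set()
    while report in cites and report not in seen:
        seen.add(report)
        report = cites[report]
    return report

cites = {
    "vendor_blog": "news_article",
    "news_article": "forum_post",
    "isac_bulletin": "forum_post",
}

roots = {original_source(r, cites) for r in ("vendor_blog", "news_article", "isac_bulletin")}
if len(roots) == 1:
    print("Apparent corroboration collapses to a single origin:", roots.pop())
```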

Subjectivity

Even with a defined framework, grading has a human element. An analyst’s personal experience, cognitive biases, or even a past negative interaction with a source can influence their rating.

This is why it’s vital to use Structured Analytical Techniques (SATs) in your intelligence analysis. Techniques such as Analysis of Competing Hypotheses and the Diamond Model help you evaluate data, demonstrate your methodology, and clearly communicate your analysis work.

Deliberate Deception

This is perhaps the most dangerous challenge. Threat actors are aware that analysts are watching, and they actively employ disinformation and false flags to mislead them. This makes the assessment of source reliability and information credibility even more crucial.

So, how can you overcome these challenges? Let’s take a look at some best practices you can follow to make grading CTI sources easier. 


Best Practices for Determining Source Reliability

To help you get started grading source reliability and information credibility, here are some practical tips. These tips will allow you to overcome many of the challenges associated with grading sources and build a reliable intelligence function.

  1. Don’t Reinvent the Wheel: Adhere strictly to the universal criteria of the Admiralty Code. Customizing the definitions undermines their value as a common language for the intelligence community.
  2. Triangulate Your Sources: Never rely on a single source. Aggregate and cross-reference intelligence from multiple, diverse sources to verify information and improve your assessment of source reliability and information credibility (see the sketch after this list).
  3. Train and Calibrate Your Team: Ensure every analyst understands the grading system. Conduct regular tabletop exercises with real-world scenarios to align your team’s assessments and ensure consistency.
  4. Audit and Review: Source reliability is not static. A source that is reliable today may not be tomorrow. Regularly audit your sources and retire those that no longer provide timely, accurate intelligence.
  5. Participate in Sharing Communities: Engage with Information Sharing and Analysis Centers (ISACs) and other peer groups. These communities provide a valuable forum for exchanging vetted intelligence and gaining peer validation.
  6. Trust, but Verify: Finally, remember that ratings are a tool to aid judgment, not replace it. Even an A1 intelligence report deserves critical scrutiny. Always question, always verify.
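
As a rough illustration of tip 2, the snippet below counts how many independent feeds report the same indicator before calling it corroborated. The feed names, indicators, and the two-feed threshold are all assumptions for the sketch:

```python
# Rough sketch of triangulation: count how many distinct feeds report the same
# indicator before treating it as corroborated.
from collections import defaultdict

observations = [
    ("198.51.100.7", "commercial_feed_a"),
    ("198.51.100.7", "isac_sharing_group"),
    ("198.51.100.7", "internal_sandbox"),
    ("203.0.113.42", "anonymous_forum"),
]

feeds_per_indicator = defaultdict(set)
for indicator, feed in observations:
    feeds_per_indicator[indicator].add(feed)

for indicator, feeds in feeds_per_indicator.items():
    status = "corroborated" if len(feeds) >= 2 else "single-source: seek corroboration"
    print(f"{indicator}: {len(feeds)} feed(s) -> {status}")
```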

Conclusion

Ultimately, trust is the core commodity of cyber threat intelligence. It is not freely given; it must be earned through a source’s consistent accuracy, measured with objective frameworks, and constantly re-evaluated as the threat landscape shifts.

By implementing a disciplined framework, such as the Admiralty Code, to assess source reliability and information credibility, CTI teams move beyond guesswork. They build a defensible, repeatable process that allows them to cut through the noise, combat misinformation, and transform raw data into accurate, actionable intelligence.

They can offer the kind of high-confidence insights that empower an organization to build a truly resilient security posture.

Frequently Asked Questions

How Do You Check if a Source Is Reliable?

You check a source’s reliability by evaluating it against established criteria, not just gut feeling. This involves assessing its historical accuracy, reputation in the security community, author expertise, transparency of data collection, and potential biases. A great intelligence framework for performing this analysis is the Admiralty Code.

What Indicates a Reliable Source?

A reliable source demonstrates key traits: a proven record of accurate and timely intelligence, transparency regarding data and methodology, objective and fact-based reporting free from sensationalism, and corroboration by other trusted sources. Together, these traits place it within the consensus of the intelligence community.

What Makes a Source Unreliable?

A source is considered unreliable if it has a history of inaccuracies, bias, or the promotion of a particular perspective instead of presenting facts. A lack of transparency is also a red flag; if you don’t know how it obtained the information, it is untrustworthy. Treat anonymous sources with skepticism, considering whether they are involved in spreading disinformation.

What is the NATO Admiralty Code?

The NATO Admiralty Code is a standardized system for grading intelligence reports. It uses a two-character rating: a source reliability grade from A to F and an information credibility score from 1 to 6. This dual system enables quick, consistent assessments, ensuring clear communication between analysts and decision-makers.