Junior CTI Analysts Need to Learn How the Pieces Fit Together
Junior analysts can usually recognize the actor name, copy the IOCs, map the MITRE techniques, and summarize the report. Where they tend to struggle is figuring out how those pieces actually relate back to the organization.
It is not necessarily that they cannot identify the data. It is that they are still learning how the data combines into context: whether it matters, who it matters to, and what someone should realistically do with it.
A report can hand someone a lot of information, but the analyst still has to make sense of it. They have to figure out what is actually being said, what is being assumed, what is useful, what is missing, and what a reasonable next step looks like.
This is where Feynman-style learning can be useful. Not in the generic “explain it like I’m five” way that gets thrown around online, but as a way to test whether the analyst actually understands the thing they just read.
If they cannot explain what happened, why it matters, and what someone should do with it, then there is probably still a gap somewhere.
Where junior analysts usually get stuck
The sticking point usually shows up after the report has already been read.
The analyst has the actor name, the indicators, the techniques, the affected sector, and the recommendation from the source. On paper, the important pieces are there. But the next step is less obvious.
- Does the sector mention actually apply to us?
- Are the indicators still useful, or are they already stale? (A quick way to sanity-check that is sketched at the end of this section.)
- Is the attribution strong enough to shape our response?
- Does this require action now, or is it something to track?
- Who needs the information: SOC, IT, leadership, customers, or an MSP team?
This is where junior analysts need practice. Not just pulling out the right details, but weighing those details against the organization’s reality.
The report gives them the starting material. The harder part is figuring out what, if anything, the organization should do with it.
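The indicator-freshness question is the easiest of these to start sanity-checking mechanically. Here is a minimal sketch in Python, assuming the analyst has pulled a last-seen date for each indicator from the report or a TI platform; the indicator values, field names, and 90-day threshold are all illustrative, not a standard.

```python
from datetime import datetime, timezone

# Hypothetical indicators, with last-seen dates pulled from the
# report or a TI platform. Values here are illustrative only.
iocs = [
    {"value": "invoice-portal[.]example", "last_seen": "2024-01-10"},
    {"value": "203.0.113.7", "last_seen": "2024-06-02"},
]

STALE_AFTER_DAYS = 90  # arbitrary cutoff; tune to your environment

now = datetime.now(timezone.utc)
for ioc in iocs:
    last_seen = datetime.strptime(ioc["last_seen"], "%Y-%m-%d").replace(tzinfo=timezone.utc)
    age_days = (now - last_seen).days
    status = "possibly stale" if age_days > STALE_AFTER_DAYS else "recent"
    print(f"{ioc['value']}: last seen {age_days} days ago ({status})")
```

A check like this does not prove an indicator is dead. It just keeps an analyst from treating a year-old IP address as breaking news.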
Why this matters
Most stakeholders are not asking for a threat report recap.
They are trying to make a decision.
A SOC lead may need to know if detections should change. A vulnerability manager may need to know if something affects patch priority. An MSP may need to know which customers should hear about the issue first. A business leader may only need a simple explanation of whether something is relevant, urgent, or worth watching.
So when a summary says something like:
> Threat actors are using phishing techniques and organizations should monitor for suspicious activity.
That sounds fine, but it does not help much, because nobody knows what to do with it.
A better answer would explain why the activity may matter, what evidence supports that, what is still unknown, and what someone should check first.
For example:
> This phishing activity may be relevant because it uses invoice lures and credential theft, which could affect teams that handle vendor payments or shared mailboxes. We do not have evidence that it hit our environment, but it is worth checking email security logs, DNS/proxy logs, and authentication activity for related signs before treating it as an incident.
That is more useful because it gives the reader something to work with. It does not overstate the issue, but it also does not leave the reader with vague advice.
That is the difference junior analysts need to practice.
What most people miss
The value of plain explanation is not that it makes CTI “easy.”
It is that it forces the analyst to confront what they actually understand, and what they do not.
When an analyst has to explain a report in plain English, they cannot hide as easily behind copied language. They have to decide what the report is actually saying.
That is when they may realize:
- They know the actor name, but not how strong the attribution is.
- They copied the IOCs, but do not know if they are fresh.
- They saw an industry mention, but do not know if it applies to their organization.
- They listed a MITRE technique, but cannot explain what behavior it represents.
- They wrote “monitor for suspicious activity,” but cannot say what anyone should monitor.
That is not a bad thing. That is where the learning is.
A junior analyst should not be expected to magically know all of this right away. But they do need a repeatable way to practice it, because this is the part that turns information into actual intelligence work.
A simple way to practice
The practice does not need to be complicated.
Take one piece of threat information and walk it through the same questions every time. The point is to slow down enough that the report is not treated like the final answer.
1. Start with one source
Use one report, advisory, malware writeup, actor profile, alert, or internal incident note.
Do not start with ten tabs open. That usually turns into collecting more information before the first source is even understood.
Pick one thing and work through it.
2. Explain it in plain language
Write what is happening without copying the report’s wording.
For example:
> Someone is sending fake invoice emails to get users to click a link and enter their credentials.
That is not fancy, but it shows the analyst understands the basic activity.
If the analyst cannot write a plain version, they probably need to reread the source. Not because they failed, but because they have not actually processed it yet.
3. Separate what was observed from what was assessed
This is one of the most important habits.
Observed means what the source actually showed or stated.
Assessed means what the source, or the analyst, thinks it means.
For example:
Observed:
- The emails used invoice-themed lures.
- The links led to credential harvesting pages.
- The report mentioned organizations in one sector.
- The source listed domains and IP addresses tied to the activity.
Assessed:
- The activity may be financially motivated.
- The campaign may matter more to organizations with heavy invoice workflows.
- The listed infrastructure may no longer be active.
- The targeting may be broad rather than focused on one specific company.
This helps because a lot of junior summaries blur the line between what was seen and what was concluded.
Once that line gets blurry, the reader cannot tell how much confidence to place in the summary.
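One way to make that separation hard to skip is to record the two categories in different fields from the start. A minimal sketch, using Python dataclasses and the invoice-phishing example; the structure and field names are one illustration, not an established schema.

```python
from dataclasses import dataclass, field

@dataclass
class ReportNotes:
    source: str
    observed: list[str] = field(default_factory=list)  # what the source showed or stated
    assessed: list[str] = field(default_factory=list)  # what the source, or we, think it means
    unknowns: list[str] = field(default_factory=list)  # limits the reader should know about

notes = ReportNotes(
    source="Vendor phishing report (hypothetical)",
    observed=[
        "Emails used invoice-themed lures",
        "Links led to credential harvesting pages",
        "Source listed domains and IPs tied to the activity",
    ],
    assessed=[
        "Activity may be financially motivated",
        "Listed infrastructure may no longer be active",
    ],
    unknowns=[
        "Whether the campaign is still active",
    ],
)
```

If a statement does not fit cleanly into one of those lists, that is usually a sign the analyst has not yet decided whether it is evidence or conclusion. The unknowns field is doing step 5's job early, which is fine; keeping it next to the assessments makes it harder to forget.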
4. Tie the judgment back to evidence
If the analyst writes that something is likely financially motivated, they should be able to explain why.
Maybe it is because the activity involved credential theft. Maybe the lure was invoice-themed. Maybe the behavior looked more like access or fraud than disruption.
That chain does not have to be perfect, but it should be visible.
The same thing applies to actor attribution. If a report says an actor is involved, the analyst should be careful with how they repeat that.
There is a big difference between:
> Actor X did this.
and:
> The source attributes this activity to Actor X, though this summary has not independently validated that attribution.
That may feel like a small wording difference, but it matters. Junior analysts need to learn when they are stating something as a fact and when they are repeating someone else’s assessment.
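That discipline can even be rehearsed mechanically. Here is a toy sketch of the idea, assuming a simple flag for whether the attribution was independently validated; the function, its arguments, and the wording are illustrative only.

```python
def attribution_sentence(actor: str, source: str, validated: bool) -> str:
    """Phrase an attribution claim according to how well it is supported."""
    if validated:
        return f"We assess this activity is linked to {actor}."
    # Default to repeating the claim as the source's, not ours.
    return (
        f"{source} attributes this activity to {actor}; "
        "this summary has not independently validated that attribution."
    )

print(attribution_sentence("Actor X", "The vendor report", validated=False))
```

The point is not to automate the writing. It is that the analyst has to decide, explicitly, which of the two cases they are in before the sentence can be written.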
5. Write down what is still unknown
A lot of junior analysts feel like unknowns make the analysis weaker, so they avoid them.
That usually makes the analysis worse.
Unknowns help people understand how to use the information.
Examples:
- We do not know if the campaign is still active.
- We do not know if the targeting is broad or specific.
- We do not know if the IOCs are fresh.
- We do not know if related activity has appeared in our logs.
- We do not know if the actor attribution is strong enough to matter for our response.
This is not overcomplicating the work. It is making the limits clear.
A useful CTI summary should not pretend to know more than it does.
6. Translate the issue for one audience
A threat report does not mean the same thing to everyone.
The SOC may care about detections. IT may care about account controls or configuration. Vulnerability management may care about affected products. Leadership may care about relevance and urgency. An MSP may care about which clients need a heads-up.
Junior analysts should practice picking one audience and writing for that audience.
For example, with phishing:
For the SOC:
Check email security alerts, DNS/proxy logs, and authentication events for signs of credential harvesting activity.
For IT operations:
Look for mailbox rules, forwarding changes, or user reports tied to invoice-themed emails.
For leadership:
This is a credential theft concern, not proof that we were compromised. The useful next step is checking whether similar activity touched our users.
The recommendation should be specific enough that someone knows where to start. “Monitor for suspicious activity” is usually too broad. A better version points to the specific logs, systems, users, or controls the audience should review, or the decision it should inform.
Before calling the work done, the analyst should ask whether the answer helps someone make a decision. If it does not, the writeup may still be useful as notes, but it probably is not finished intelligence.
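For the SOC recommendation above, “specific enough” might look like a first-pass sweep of proxy or DNS logs for the domains the report listed. A rough sketch, assuming plain-text logs at a hypothetical path; the domains, path, and format are placeholders, and in practice this search would likely run in a SIEM.

```python
# First-pass check: did any host resolve or visit the report's listed domains?
# The log path, format, and domains below are hypothetical placeholders.
report_domains = {"invoice-portal.example", "secure-billing.example"}

hits = []
with open("/var/log/proxy/access.log") as log:
    for line in log:
        if any(domain in line for domain in report_domains):
            hits.append(line.strip())

if hits:
    print(f"{len(hits)} possible matches; review before escalating:")
    for line in hits[:20]:  # cap output for readability
        print(" ", line)
else:
    print("No matches in this log; widen the time window or check other sources.")
```

A match here is still not an incident. It is exactly the kind of “check first” step the earlier summary promised, and it gives the SOC a concrete place to start.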
What to do next
Junior analysts can practice this once a week with one source.
It does not require a full training program. It does not require a perfect lab. It just requires a consistent way to work through the same questions until the habit starts to stick.
Use this template:
```md
Source:
What did I read?
Plain-English explanation:
What is happening?
Observed facts:
What did the source actually show or state?
Assessment:
What do I think this means?
Evidence:
What supports that assessment?
Unknowns:
What is still unclear?
Audience:
Who should care about this?
Recommended next step:
What should that person or team do next?
Decision test:
Would this help someone make a better decision?
```
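For reference, here is roughly what the template might look like filled in for the invoice-phishing example used throughout this piece:

```md
Source:
Vendor report on an invoice-themed phishing campaign.
Plain-English explanation:
Someone is sending fake invoice emails to get users to click a link and enter their credentials.
Observed facts:
Invoice-themed lures; links led to credential harvesting pages; the source listed related domains and IPs.
Assessment:
Likely financially motivated credential theft; targeting appears broad rather than aimed at us.
Evidence:
Credential harvesting plus invoice lures point to fraud or access, not disruption.
Unknowns:
Whether the campaign is still active, whether the IOCs are fresh, whether related activity appears in our logs.
Audience:
SOC.
Recommended next step:
Check email security alerts, DNS/proxy logs, and authentication events for signs of this activity.
Decision test:
Yes; it tells the SOC whether this is a watch item or an investigation.
```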
Rotate the source type so the analyst does not only practice on one kind of material.
One week, use a phishing report.
Another week, use a vulnerability advisory.
Another week, use a malware writeup.
Another week, use an actor profile.
Another week, use an internal alert.
The repetition matters because the same issues keep showing up in different forms.
The analyst learns to notice when a claim is not supported.
They get more careful with attribution.
They stop treating IOCs as the whole story.
They learn to explain uncertainty without sounding unsure of everything.
They start writing recommendations that someone can actually act on.
For team leads, this can turn into a simple review habit.
Have the analyst bring one short writeup each week. Do not only edit the grammar. Review the thinking.
Ask:
- Where did they overstate?
- Where did they assume?
- Where did they confuse evidence with conclusion?
- Where did they recommend something too vague?
- Where did they miss the audience?
- Where did they need more context?
That kind of review builds better analysts faster than just assigning more reports to read.
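Some of that review can even start before the meeting. Here is a deliberately crude sketch that flags stock phrases which usually signal a vague recommendation; the phrase list is only a starting point, and a string match is no substitute for the lead's judgment.

```python
# Crude pre-review pass: flag stock phrases that usually mean the
# recommendation gives the reader nothing concrete to do.
VAGUE_PHRASES = [
    "monitor for suspicious activity",
    "remain vigilant",
    "follow security best practices",
]

def flag_vague_lines(writeup: str) -> list[str]:
    flagged = []
    for lineno, line in enumerate(writeup.splitlines(), start=1):
        for phrase in VAGUE_PHRASES:
            if phrase in line.lower():
                flagged.append(f"line {lineno}: contains '{phrase}'")
    return flagged

draft = "Teams should monitor for suspicious activity across the estate."
for warning in flag_vague_lines(draft):
    print(warning)
```

It catches the easy cases before the conversation starts, so the review time goes to the thinking instead.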
Closing thought
Junior CTI analysts do not just need more terms, more feeds, or more reports.
They need practice turning the pieces into context.
They need to learn how actor names, IOCs, techniques, targeting, evidence, uncertainty, and recommendations fit together in a way that helps someone make a decision.
That is why Feynman-style learning fits CTI development so well. If the analyst can explain the activity plainly, separate what was observed from what was assessed, show what supports the judgment, name what is still unknown, and give a practical next step, they are moving beyond summary.
They are learning how to produce intelligence someone can use.
The useful question is not only:
> What happened?
It is:
> Does this matter to us, and what should we do next?