Why We Invested in Above Security
The problem security wasn’t designed to solve, and the behavioral intelligence layer built to address it
There’s a strange irony in enterprise security. Organizations have never spent more to defend themselves. Firewalls, endpoint detection, identity platforms, SIEMs. The modern security stack has never been more sophisticated, and the external perimeter has never been better protected.
Yet some of the most damaging breaches don’t come from outside at all.
Roughly 60% of enterprise breaches now involve the human element: employees, contractors, and other authorized users operating with legitimate access. Often, these incidents aren’t malicious. They’re negligent.
These are not edge cases. They are the dominant failure mode.
The average incident costs $17.4M to remediate.
In nearly every conversation we had with CISOs over the past year, insider risk ranked as a top-two concern. Almost no one felt their current tools were doing the job.
The problem is about to get orders of magnitude harder.
A Patchwork That Was Never Designed to Work
The tools exist. They just weren’t built for this.
Today’s insider risk programs are assembled from UEBA platforms, IAM, DLP, session recording, and SIEM stacks. Each covers a slice of the problem; none owns the whole.
These tools were designed around discrete events and static policies. They see what happened. They don’t understand why. The result is programs that are reactive, investigation-heavy, and buried in noise. They explain breaches after the fact rather than preventing them. Significant budget sits trapped in tools that haven’t delivered, and buyers are actively looking for something better.
This isn’t just a tooling gap but an architectural one: a mismatch between how insider risk actually manifests and how security systems were designed to detect it.
Insider risk rarely announces itself in a single moment. It lives in sequences, in the full arc of someone’s behavior that only becomes legible when you connect it across time.
An employee who has been quietly researching competitors, accessing sensitive files they’re authorized to use, and uploading to a personal account hasn’t done anything obviously wrong at any individual step. A risk like this only becomes visible in the arc, and legacy tools were never designed to see it.
Now layer in what’s happening with AI. Employees are routing sensitive data through personal AI tools and delegating tasks to agents operating inside corporate systems. These agents have real access, take real actions, and operate at machine speed but are largely invisible to existing insider risk programs.
As Aviv Nahum, Above’s CEO, puts it, “AI agents are becoming insiders in everything but name.” The perimeter of who, or what, counts as an insider is expanding faster than any existing tool was built to handle.
A New Model for Insider Risk
Above Security is building what this category has needed from the start: a platform that models intent, not just activity, across all actors, machine and human alike.
It connects to the systems where work actually happens (the browser, SaaS applications, collaboration tools, and AI workflows) and continuously builds a behavioral model for each identity in the organization. Above’s platform learns what’s normal for this person, or agent, in this role at this company.
When that context shifts and sequences of behavior start pointing somewhere concerning, Above surfaces it: not because a rule was tripped, but because the underlying intelligence detects a change.
This produces a quality of signal the market hasn’t experienced before.
Above models behavior longitudinally, anchored in individual context, to eliminate the noise inherent in event-based systems. Without relying on rules, policies, or manual configuration, it surfaces high-fidelity signals so security teams can prioritize and act with confidence.
When risk emerges, Above intervenes in the moment, guiding users in real time before an action is completed rather than alerting after the fact. When investigation is required, it generates a complete, cross-functional evidence trail that is immediately usable by security, HR, legal, and compliance, eliminating workflows that typically take weeks or months.
The foundation they’re building is designed to compound. A deep, longitudinal model of how every identity behaves, human or machine, becomes infrastructure that naturally expands to new threat vectors as the way work evolves.
This is not something incumbents can easily replicate. It requires a fundamentally different data and modeling architecture than event-driven systems were built on.
Clarity in a Confused Category
Aviv Nahum comes from Israeli intelligence, where behavioral modeling is core to the mission, and has spent his career at the frontier of AI and agentic systems.
What sets him apart, beyond technical depth, is his clarity of thought. Insider risk is a category defined by confusion, often conflated with DLP, folded into UEBA, and fragmented across legacy silos. Aviv sees the problem with unusual precision: why prior approaches have failed, and exactly where Above fits. We saw this clearly from our first conversation.
Why We Invested
The insider risk category is at an inflection point. Budgets are moving, urgency is high, and the last generation of tools has largely failed.
What the market needs now isn’t another point solution. It’s a platform that owns the behavioral layer, improves over time, and compounds in value the longer it runs.
Above is building that future. We’re proud to partner with Aviv, Amir, and their team.
This article is for informational purposes only and does not constitute investment advice. Jump Capital is an investor in Above Security. Views expressed represent the opinions of the authors and Jump Capital. Forward-looking statements involve risks and uncertainties, and references to specific companies and their capabilities do not constitute investment recommendations or guarantee future performance.