Cybersecurity R&D and applied AI: from the lab to operations with evidence
This pillar is the area index for R&D: what we research, what we deliver, where it creates value and how we measure it. For the editorial view (AI landscape, full approach, team and accreditations), see the dedicated R&D page. Here we focus on how we translate research into operational detection, governed automation, controls and audit-ready evidence.
Focus
Threats + applied AI
research with operational context
Validation
Honest criteria
accuracy, cost, risk, auditability
Delivery
To operations + GRC
rules, playbooks and evidence
Built for regulated and demanding environments: governance, execution and defensible evidence.
Execution quality
“Security that runs”: operations + governance + auditability. We don’t stop at diagnosis: we close gaps, verify, and produce defensible evidence.
What the R&D area covers in practice
- Ongoing Threat Research: emerging techniques, tools and patterns with operational context.
- Defensive AI evaluated with honest criteria: accuracy, cost, risk, traceability and auditability.
- Prototyping with validation criteria defined before starting (if it doesn't pass, it doesn't pass).
- Transfer to operations: detection rules, playbooks, hardening and measurable SLAs.
- GRC integration: controls, metrics and audit-ready evidence (DORA/NIS2/ENS/ISO 27001).
- Proprietary products and technical assets (NormAI, CortexShield) where integration justifies it.
R&D is not a drawer for 'things we tried': it's a process with validation criteria defined upfront. If a prototype doesn't improve operations in a measurable way or doesn't leave useful evidence for GRC, we drop it. Whatever reaches production is integrated with SOC/MDR, incident response, hardening or regulatory controls, with operational documentation and traceability.
For the editorial view of the area — AI and threat landscape, approach in depth, team and accreditations (Pyme Innovadora) — see the full R&D page.
View the full R&D area (landscape, approach and team) →
Deliverables (from research to operations and GRC)
Research reports
Synthesis of observed techniques, tools and patterns, with executive context and concrete actions for operations or GRC.
Operable prototypes
PoCs measured on representative data, with formalized decision criteria and estimated operational cost before moving to production.
Capabilities for operations
Documented detection rules, playbooks, runbooks, hardening and automations, with handover to SOC/MDR or IR.
Evidence for GRC
Controls, procedures, metrics and traceability aligned with the applicable frameworks (DORA/NIS2/ENS/ISO 27001).
Typical use cases
Evaluate AI before adopting it
Critical analysis of market solutions: what they improve in operations, what they really cost and what risks they introduce (leaks, hallucinations, vendor lock-in).
Detect emerging techniques
From weak signal to validated rule: research, hypothesis, representative data, testing and documentation ready for production.
Governed SecOps automation
Reduce repetitive tasks in SOC/MDR with automation that leaves evidence and can be audited — not opaque 'magic'.
Program against advanced social engineering
Countermeasures for hyper-personalized phishing, deepfakes and BEC: identity, verification, detection and continuous measurement.
Integrate internal capabilities
Connect existing tools (EDR, SIEM, identity, email, cloud) with criteria for traceability and reusable evidence.
Support leadership on innovation bets
Diagnosis and prioritization: what's worth exploring, how to measure it and how to take it to operations without losing control or auditability.
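The "governed automation" use case above boils down to one rule: every automated action must leave a structured, auditable evidence record, not just a side effect. A minimal Python sketch of that pattern, with all names (the decorator, the store, the SOC action) purely illustrative:

```python
import hashlib
import json
import time

AUDIT_LOG = []  # illustrative; in production this would be an append-only, tamper-evident store


def audited(action_name):
    """Wrap an automation step so every run leaves a structured evidence record."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            record = {
                "action": action_name,
                # hash of the inputs, so the record is traceable without leaking raw data
                "inputs_sha256": hashlib.sha256(repr((args, kwargs)).encode()).hexdigest(),
                "result": result,
                "timestamp": time.time(),
            }
            AUDIT_LOG.append(json.dumps(record, sort_keys=True))
            return result
        return wrapper
    return decorator


@audited("close_stale_alert")
def close_stale_alert(alert_id):
    # placeholder for the real SOC/MDR action (e.g. an EDR or SIEM API call)
    return {"alert_id": alert_id, "status": "closed"}


close_stale_alert("A-1042")
# AUDIT_LOG now holds one evidence record: who did what, to which inputs, with what result
```

The point of the sketch is the shape of the record, not the mechanism: an auditor can replay what ran, on what, and with what outcome, which is what separates governed automation from opaque "magic".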
FAQ (R&D, threats and AI)
What's the difference between this pillar and the in-depth R&D page? ↓
The pillar is the area index: capabilities, deliverables, use cases and direct answers for discovery. The dedicated R&D page (/en/services/research-development/) gives the editorial view: AI and threat landscape, department approach, team and accreditations like Pyme Innovadora. They're complementary: one to decide if we fit your need, the other to understand how we think.
Do you develop your own AI or evaluate AI from the market? ↓
Both, with discipline. We evaluate market AI (including large providers) with honest criteria: real accuracy on representative data, operational cost, risk (leaks, hallucinations, vendor lock-in), traceability and auditability. In parallel, we develop in-house capabilities and proprietary products (NormAI, CortexShield) where integration with operations and GRC justifies it. Our stance: if it can't be audited, it shouldn't be automated.
How do you decide when a capability moves from lab to production? ↓
We define 'acceptable' upfront: success metric, operational cost, residual risk, traceability requirements. We measure on representative data and compare against the current alternative (including 'do nothing'). If the prototype passes the threshold and leaves useful evidence for GRC, it moves to production with handover to operations; if not, we document learnings and drop it. We avoid the 'eternal PoC' loop that never lands.
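The go/no-go decision described in this answer can be expressed as a simple gate: thresholds agreed before the PoC is built, then measured results checked against them. A minimal Python sketch, where the metric names and threshold values are illustrative, not our actual criteria:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Criteria:
    """Acceptance thresholds agreed before the prototype is built."""
    min_precision: float        # detection quality on representative data
    max_cost_per_event: float   # estimated operational cost per event (illustrative unit)
    requires_audit_trail: bool  # must leave usable evidence for GRC


def gate(criteria, measured):
    """Return (go, reasons): go only if every agreed threshold is met."""
    reasons = []
    if measured["precision"] < criteria.min_precision:
        reasons.append("precision below agreed threshold")
    if measured["cost_per_event"] > criteria.max_cost_per_event:
        reasons.append("operational cost above budget")
    if criteria.requires_audit_trail and not measured["has_audit_trail"]:
        reasons.append("no usable evidence for GRC")
    return (not reasons, reasons)


criteria = Criteria(min_precision=0.95, max_cost_per_event=0.02, requires_audit_trail=True)
go, reasons = gate(criteria, {"precision": 0.91, "cost_per_event": 0.01, "has_audit_trail": True})
# go is False: the prototype is documented and dropped, not pushed to production
```

Because the criteria are frozen before measurement, the gate cannot be quietly relaxed after the fact, which is what prevents the "eternal PoC" loop.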
What do you do with the research: does it stay in reports or reach operations? ↓
It reaches operations. We convert findings into detection rules, playbooks, hardening, regulatory controls and — when it makes sense — technical assets or products. The R&D team works alongside GRC and managed security so the change is real: executive reporting, SLAs, audit-ready evidence and measurable risk reduction.
What proprietary products do you have and where do they fit? ↓
Two main assets: NormAI (AI applied to compliance and governance) and CortexShield. They're listed in the /productos/ catalog with a dedicated page each. They fit when a client needs a specific capability we've already brought from lab to product and wants to leverage the integration with our managed security model and evidence.
What does Pyme Innovadora mean and what do the European projects imply? ↓
Pyme Innovadora is an accreditation from Spain's Ministry of Science and Innovation that recognizes real investment in R&D and provides access to tax incentives and specific funding lines. We take part in initiatives and projects with a European focus to accelerate the transfer of research into operations; we don't publish consortium or programme names here to protect third-party data, but we're happy to discuss them in the right setting.
What’s included in this service area
- Threat Research: emerging techniques, tools and patterns
- Defensive AI: critical evaluation and operational integration
- Prototyping with honest validation (accuracy, cost, risk)
- Transfer to operations: SOC/MDR, IR, hardening
- GRC integration: controls, metrics and audit-ready evidence
- Proprietary technical assets and products (NormAI, CortexShield)
How we work (from assessment to evidence)
Step 1
Objective & criteria
Use case, hypothesis, operational requirements and success metric. We define upfront what counts as acceptable (accuracy, cost, risk, auditability) to avoid prototypes that never reach production.
Step 2
Prototype & measurement
PoC with representative data and honest measurement. Integration, automation or specific tooling; comparison against the current alternative and real operational cost estimate.
Step 3
Iteration & hardening
Hardening, coverage, testing, telemetry and operational documentation. Joint review with GRC (controls, evidence) and operations (playbooks, runbooks, SLAs).
Step 4
Delivery & continuous improvement
Deployment, formal handover to operations/GRC and maintenance plan with review cycles. Anything that doesn't improve operations or leave useful evidence is dropped.
Services in this area
Talk to an expert →
Is this service area a fit for your case?
We’ll run a short assessment to define scope, priorities, and a realistic roadmap.