noze. Open Intelligence, Secure Governance

The tagline that gives a name to over 25 years of AI and Cyber Security built together, by the same team, on the same projects.


A tagline, not a change of course

Open Intelligence, Secure Governance is the decision to give a name to what noze has been doing for over twenty-five years. Not a new positioning, but the choice to communicate it clearly — because an identity built over time deserves to be told.

Two principles have always been inseparable at noze: open intelligence, built on transparent models, code and standards, and secure governance, designed from the outset to protect, comply and make every system verifiable.

One multidisciplinary team, not two departments

Today the market treats AI and Cyber Security as parallel disciplines: AI teams try to bolt on security after the fact; security teams try to integrate AI as an accelerator. At noze the two competences have never been separated.

The reason is organisational: the people who design machine-learning models and the people who analyse vulnerabilities and run penetration tests are used to working together, on-site and remotely, on the same projects, with shared tools and no handoffs between departments. In practice:

  • Every model is designed with PII redaction, audit trails and access policies from the first prototype — not as a layer added after deployment.
  • Vulnerability assessment uses risk-prioritisation models, not just static rules. The compliance engine automatically classifies AI systems under the EU AI Act.
  • Governance policies apply in real time to both AI output and user input — ALLOW, BLOCK, REDACT.
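The three verdicts above can be sketched as a small rule-based policy engine. This is a minimal illustration, not noze's actual implementation: the `evaluate` function, the regex patterns and the `Action` names are assumptions chosen to show how one policy can be applied symmetrically to user input and model output.

```python
import re
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REDACT = "redact"

# Illustrative PII patterns; a production engine would use far broader detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}
# Content that should never cross the boundary in either direction.
BLOCKLIST = re.compile(r"(?i)\b(password|api[_ ]?key)\s*[:=]")

def evaluate(text: str) -> tuple[Action, str]:
    """Apply the same policy to user input and to AI output."""
    if BLOCKLIST.search(text):
        return Action.BLOCK, ""          # refuse the message outright
    redacted = text
    for label, pattern in PII_PATTERNS.items():
        redacted = pattern.sub(f"[{label.upper()} REDACTED]", redacted)
    if redacted != text:
        return Action.REDACT, redacted   # pass through with PII masked
    return Action.ALLOW, text
```

Running the same check on both directions of the conversation is what makes the policy "bidirectional": a prompt leaking credentials is blocked before it reaches the model, and a model answer containing an email address is redacted before it reaches the user.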

Where the products come from

On-premise architectures with local LLMs grew out of years of work on clinical data that could not leave the hospital perimeter. The AI risk classifier is the direct result of compliance projects already under way with clients. Continuous vulnerability monitoring took shape while addressing the concrete needs of SMEs and public bodies.

There was no strategic decision to get ahead of the market. There was a daily practice of working on AI and security that, over time, produced mature tools just as regulation and the market made them necessary.

The current stack

Five products, each with a specific technical scope:

  • CyberScan — vulnerability scanner with continuous asset discovery, automated pentesting and ML-based risk prioritisation. Output: technical reports, CVSS scoring, remediation plans.
  • DataGovern — on-premise compliance platform. Covers GDPR (processing register, DPIA, automated DSARs), NIS2 (gap analysis, remediation plan) and EU AI Act (AI risk classification, technical documentation). Single deployment, local data.
  • IntelliPA — workstation with local LLMs for Public Administration. RAG over internal documents, chatbot over the entity’s regulations, no data transmitted to external services. Purchasable via MePA, the Italian public-administration e-procurement marketplace.
  • AIHealth — clinical support suite: RAG over FHIR/DICOM data, diagnostic support models, remote follow-up. On-premise architecture, planned MDR pathway. Developed with experience from the Meyer and CNR projects.
  • Admina Enterprise — enterprise version of the Open Source Admina framework (Apache 2.0). Proxy for AI calls with audit trail, PII redaction and bidirectional policies. Written in Python and Rust.
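To make the EU AI Act classification mentioned for DataGovern concrete, here is a hedged sketch of how a rule-based classifier over the Act's four risk tiers might be structured. The `AISystem` type, the `classify` function and the purpose keywords are simplified illustrations of the regulation's structure, not the platform's actual logic or the Act's full Annex III categories.

```python
from dataclasses import dataclass

# EU AI Act risk tiers (simplified; Annex III defines the real high-risk list).
PROHIBITED = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"biometric identification", "critical infrastructure",
             "employment screening", "credit scoring", "medical device"}
TRANSPARENCY = {"chatbot", "deepfake generation", "emotion recognition"}

@dataclass
class AISystem:
    name: str
    purposes: set[str]   # declared intended purposes of the system

def classify(system: AISystem) -> str:
    """Map a system's declared purposes to an EU AI Act risk tier."""
    if system.purposes & PROHIBITED:
        return "unacceptable-risk (prohibited)"
    if system.purposes & HIGH_RISK:
        return "high-risk (conformity assessment required)"
    if system.purposes & TRANSPARENCY:
        return "limited-risk (transparency obligations)"
    return "minimal-risk"
```

For example, a CV-screening assistant declaring the purpose "employment screening" lands in the high-risk tier, which under the Act triggers conformity assessment and technical-documentation duties, the kind of output DataGovern's technical-documentation module is described as producing.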

Strategic plan 2026–2028

The three-year plan includes the development of new products extending the AI + Cyber stack and the consolidation of existing consulting services — CISO-as-a-service, cloud-native architectures, AI governance, applied R&D — within the same positioning.
