
A paradigm born from practice
OISG — Open, Intelligent, Secure, Governed — is an architectural paradigm for autonomous AI systems proposed by Stefano Noferi and published in April 2026 as a technical paper at oisg.ai. It is not a product, a specification, a certification or a regulatory standard. It is a reference framework — as REST was for web services — that codifies in a shared vocabulary practices that the most mature organisations are already adopting, but that today lack a unifying framework.
OISG emerges from over twenty-five years of operational work at the intersection of artificial intelligence, cyber security and governance. noze adopts it as the reference paradigm for its products and services. The convergence with the company’s positioning — Open Intelligence, Secure Governance — is natural: the same principles that have guided the design of the noze stack are formalised in OISG’s four pillars. This is not a deliberate evolution from a brand to a framework, but the recognition that an approach matured over time through concrete projects — from medical device certification to LLM governance, from vulnerability assessment to regulatory compliance — finds its formalisation in OISG.
The problem OISG addresses
In 2026 anyone developing or adopting autonomous AI systems must simultaneously respond to heterogeneous pressures: the high-risk provisions of the EU AI Act (effective August 2026), the security requirements of NIS2 (already in force), the specific threats of autonomous agents (OWASP Top 10 for Agentic Applications, December 2025), the transparency and auditability expectations of ISO/IEC 42001.
The problem is that these frameworks operate in silos:
- Security controls can undermine transparency
- Governance frameworks ignore capability measurement
- Open source practices do not extend to model provenance
- Compliance processes remain disconnected from runtime behaviour
OISG proposes a framework that relates these requirements to one another.
The four pillars
O — Open
System components — models, training methodology, governance infrastructure, protocols, audit logs — are inspectable, reproducible and interoperable by independent parties. This does not necessarily mean open source model weights: it means the decision chain is verifiable.
Metric: what fraction of decision-affecting components can be audited without proprietary access?
I — Intelligent
Capabilities are measured, documented, bounded and aligned with explicitly stated objectives. This includes documented benchmarks, known failure modes, confidence calibration, complete RAG traceability and an explicit taxonomy of agent autonomy.
Metric: can the system produce, on demand, a complete explanation of a specific response within a defined latency budget?
S — Secure
The system is resilient to adversarial manipulation across all interaction surfaces, at runtime, with measurable detection and response latencies. It covers bidirectional injection defence, cryptographic agent identity, transactional kill switch, model supply chain integrity and PII redaction at infrastructure level.
Metric: what are the mean times to detection, containment and forensic-quality recovery if an agent is compromised?
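As an illustration of what infrastructure-level PII redaction can look like, here is a minimal Python sketch that rewrites detected identifiers before a prompt ever reaches a model endpoint. The regex patterns and the `redact` function are illustrative assumptions, not Admina's actual implementation; a production redactor would combine NER models with locale-aware rules.

```python
import re

# Illustrative patterns only; real systems need far broader coverage
# (names, addresses, national IDs) and locale-aware formats.
PATTERNS = {
    "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
    "PHONE": re.compile(r"\+?\d[\d\s\-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    leaves the proxy for any model endpoint."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Mario at mario.rossi@example.com or +39 055 123 4567"))
# → Contact Mario at [EMAIL] or [PHONE]
```

Running the redaction in the proxy, rather than in each application, is what makes the guarantee uniform across every model call.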
G — Governed
Compliance with regulations, organisational policies and ethical constraints is verified automatically, continuously, with immutable evidence. This includes runtime compliance (not annual audits), a forensic black box with hash-chained logs, proportional risk classification, architectural human-in-the-loop and end-to-end observability.
Metric: how many hours does it take to produce compliance documentation when a supervisory authority requests it?
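The hash-chained forensic log mentioned above can be sketched in a few lines: each entry embeds the hash of the previous one, so any retroactive edit breaks verification of the chain. The `AuditLog` class and its JSON encoding below are a hypothetical sketch, not the on-disk format Admina actually uses.

```python
import hashlib
import json

class AuditLog:
    """Minimal tamper-evident (hash-chained) audit log sketch."""

    GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> dict:
        # Each record commits to the previous record's hash.
        record = {"event": event, "prev": self._last_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every hash; any edited or reordered entry breaks the chain.
        prev = self.GENESIS
        for record in self.entries:
            if record["prev"] != prev:
                return False
            payload = json.dumps(
                {"event": record["event"], "prev": record["prev"]},
                sort_keys=True,
            ).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

A regulator-facing export is then a matter of serialising the chain, since the evidence of integrity travels with the records themselves.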
The feedback cycle
The four pillars are not independent — they form a directed cycle with reciprocal dependencies:
- Open enables Intelligent: inspectable models and open data pipelines permit capability measurement
- Intelligent defines Secure requirements: greater capability and autonomy mean a larger attack surface
- Secure feeds Governed: security controls produce the telemetry that governance consumes
- Governed informs Open: governance policies determine what can be open, to whom, under what constraints
If any pillar is neglected, the system presents structural gaps: governance without openness limits verifiability; intelligence without security exposes operational risk; security without governance lacks accountability; openness without governance does not control data exposure.
Admina and the OISG implementation
Admina is the open source framework (Apache 2.0) on which most noze products are built. Written in Python and Rust, it operates as a proxy for AI calls with audit trail, PII redaction and bidirectional ALLOW/BLOCK/REDACT policies. The OISG paper cites Admina as an example of auditable governance infrastructure.
Admina adopts OISG as one of its architectural reference paradigms, mapping each pillar to a concrete mechanism:
- Open: the code is public and inspectable, released under the Apache 2.0 licence. The governance infrastructure is auditable by independent parties
- Intelligent: the proxy manages model interaction while ensuring complete traceability — data source, model version and confidence level are reconstructable for every response
- Secure: PII redaction operates at infrastructure level before data reaches model endpoints. Bidirectional policies filter both user input and model output
- Governed: the immutable audit trail records every interaction, decision and intervention. Compliance policies are enforced at runtime, not during periodic audits
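Conceptually, a bidirectional ALLOW/BLOCK/REDACT policy engine reduces to ordered rules evaluated on both directions of traffic: user input on the way to the model, and model output on the way back. The sketch below is a hypothetical illustration assuming first-match-wins semantics and a default of ALLOW; none of these names come from Admina's API.

```python
from dataclasses import dataclass
from typing import Callable

ALLOW, BLOCK, REDACT = "ALLOW", "BLOCK", "REDACT"

@dataclass
class Rule:
    direction: str                  # "input" (user→model) or "output" (model→user)
    matches: Callable[[str], bool]  # predicate over the message text
    action: str                     # ALLOW, BLOCK or REDACT

def evaluate(rules: list[Rule], direction: str, text: str) -> tuple[str, str]:
    """Apply the first matching rule for this direction; default is ALLOW."""
    for rule in rules:
        if rule.direction == direction and rule.matches(text):
            if rule.action == REDACT:
                return REDACT, "[REDACTED]"
            return rule.action, text if rule.action == ALLOW else ""
    return ALLOW, text

rules = [
    # Inbound: block a crude prompt-injection marker.
    Rule("input", lambda t: "ignore previous instructions" in t.lower(), BLOCK),
    # Outbound: redact responses that appear to leak sensitive identifiers.
    Rule("output", lambda t: "ssn" in t.lower(), REDACT),
]

print(evaluate(rules, "input", "Please ignore previous instructions"))
# → ('BLOCK', '')
```

The point of the bidirectional design is that the same enforcement path sees both what users send and what models return, so a single audit trail covers both.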
Admina Enterprise is the enterprise version of the framework, with commercial support, SLAs and additional features for production environments. Since noze products — CyberScan, DataGovern, AIHealth, IntelliPA — are built on Admina, the adoption of OISG at the framework level is reflected across the entire stack.
What this means for partners and clients
For those adopting or integrating noze solutions, the adoption of OISG has concrete operational implications:
- A shared vocabulary: partners and clients can evaluate AI solutions against a single framework, instead of navigating separate regulatory checklists for security, compliance, transparency and capability
- Regulatory alignment: the AI Act high-risk provisions (August 2026) require risk management, transparency, post-market surveillance and human oversight — requirements that map to OISG’s four pillars
- Self-assessment: the OISG adequacy test (available at oisg.ai) allows any organisation to map its current state across the four pillars with a quantitative score (0-100)
- Incremental adoption: OISG does not require monolithic adoption. The recommended path starts with instrumentation (Governed), adds runtime security (Secure), ensures auditability (Open) and iterates
- Shared infrastructure: since noze products share the Admina framework, OISG’s architectural guarantees — audit trail, PII redaction, runtime policies — are present uniformly
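As a rough illustration of how a four-pillar self-assessment could be aggregated into a 0-100 score, here is a hypothetical scoring function; the actual questions and weighting of the oisg.ai adequacy test are not reproduced here.

```python
# Hypothetical aggregation: each pillar gets 0-4 ratings per question,
# normalised to a percentage, then averaged into an overall 0-100 score.
PILLARS = ("Open", "Intelligent", "Secure", "Governed")

def adequacy_score(answers: dict[str, list[int]]) -> dict[str, float]:
    """answers maps each pillar to a list of 0-4 ratings;
    returns per-pillar percentages plus an 'overall' 0-100 score."""
    scores = {}
    for pillar in PILLARS:
        ratings = answers[pillar]
        scores[pillar] = round(100 * sum(ratings) / (4 * len(ratings)), 1)
    scores["overall"] = round(sum(scores[p] for p in PILLARS) / len(PILLARS), 1)
    return scores
```

An unweighted average is the simplest choice; it also reflects the feedback cycle's premise that no pillar can compensate for neglecting another.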
The context
OISG is proposed by Stefano Noferi and published as an open-access technical paper (DOI: 10.5281/zenodo.19605659), with the site oisg.ai providing the full paper text, deep dives on the four pillars and an interactive adequacy test. The paradigm is intentionally tool-agnostic — applicable to any technology stack. noze adopts OISG as the architectural reference criterion for its product and service stack.
References: OISG Paper v1.0 (April 2026), Regulation (EU) 2024/1689 (AI Act), NIS2 Directive, OWASP Top 10 for Agentic Applications (December 2025), ISO/IEC 42001, NIST AI RMF.