Operating system hardening: disable unused services, close ports, apply patches, remove unnecessary accounts and software, enforce password policy, enable logging and monitoring. Frameworks such as the CIS Benchmarks or DISA STIGs provide the baseline for Windows, Linux and network gear.
What is hardening?
Hardening is the systematic process of strengthening systems, applications, networks and identities by removing what is unnecessary and enforcing secure-by-default configurations. In practice it means turning off services and protocols nobody uses, applying critical patches, closing ports, tightening permissions, replacing "out of the box" credentials and settings, and leaving an auditable trail. It is proactive defence: shrinking the attack surface before an attacker or an automated scanner finds it.
Why it matters
Most systems ship configured for easy rollout, not for security: dozens of services started by default, administrative accounts with well-known passwords, open access controls and minimal logging. In that posture, every public vulnerability turns into an entry point almost immediately, because attackers scan as soon as a CVE is published. A hardened machine is not invulnerable, but it forces the attacker to work for every step: fewer exploits apply, fewer privileges to inherit, more noise to generate. Most breaches that end in ransomware or mass exfiltration do not start with a zero-day; they start with an exposed service, a default password or a legacy protocol. Hardening is also compliance: ISO 27001 (A.8.9), NIS2, DORA and sector frameworks require it explicitly, and CIS Benchmarks or DISA STIG are the technical references most organisations use to demonstrate it.
Key points
Application hardening: remove default modules, strengthen input validation, rotate initial credentials and tokens, enable HTTP security headers, apply updates, and put a WAF in front tuned for the business — not only the vendor's default ruleset.
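The HTTP security headers mentioned above are often easiest to manage as a single reusable fragment. A minimal sketch, assuming nginx; the file name and the Content-Security-Policy value are illustrative and must be tuned per application:

```shell
# Sketch: an nginx include file with common HTTP security headers.
# Header names are standard; the CSP policy below is a placeholder.
cat > security-headers.conf <<'EOF'
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Content-Security-Policy "default-src 'self'" always;
EOF
# Then reference it from every server block:
#   include /etc/nginx/snippets/security-headers.conf;
# and reload nginx after testing the configuration with `nginx -t`.
```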
Network hardening: segment by zone (users, servers, OT, DMZ), keep management interfaces off the public Internet, enforce modern TLS, disable legacy protocols (SMBv1, unsigned LDAP, telnet) and control east-west traffic, not only north-south. It is the natural complement to network segmentation and Zero Trust.
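A default-deny host firewall is the simplest expression of the idea above. A hedged sketch using nftables, for a host that should expose only SSH and HTTPS; ports and table names are illustrative:

```shell
# Sketch: a minimal default-deny nftables ruleset. Anything not explicitly
# allowed (SMB 445, telnet 23, SNMP 161...) hits the drop policy.
cat > hardened.nft <<'EOF'
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport { 22, 443 } accept
  }
}
EOF
# Validate without applying, then load:
#   nft -c -f hardened.nft && nft -f hardened.nft
```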
Identity and cloud hardening: least-privilege service accounts, key rotation, MFA on cloud consoles, buckets private by default, reviewed IAM policies, legacy auth disabled. In cloud environments the attack surface opens and closes faster than anywhere else.
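"Private by default" for storage can be enforced explicitly rather than assumed. A sketch using the AWS CLI's S3 public access block; the bucket name is a placeholder and the call requires valid credentials, so only the payload is exercised here:

```shell
# Sketch: deny all forms of public access on an S3 bucket.
# The four flags below are the real S3 public-access-block settings.
cat > public-access-block.json <<'EOF'
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
EOF
# aws s3api put-public-access-block --bucket example-bucket \
#   --public-access-block-configuration file://public-access-block.json
```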
Reproducible hardening with Infrastructure as Code: baselines are written as code in standard IaC tools and applied at scale, not manually server by server. This prevents configuration drift and lets you show an auditor when, by whom and how each control was applied.
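What a codified baseline fragment might look like, assuming Ansible as the IaC tool (the module names are real Ansible builtins; package names and values are illustrative):

```shell
# Sketch: two baseline controls expressed as Ansible tasks, so the same
# configuration is applied identically to every host and versioned in git.
cat > baseline-tasks.yml <<'EOF'
- name: Ensure legacy services are removed
  ansible.builtin.package:
    name: [telnet, rsh-server]
    state: absent

- name: Disallow SSH password authentication
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'
    line: 'PasswordAuthentication no'
  notify: restart sshd
EOF
```

Because the baseline lives in version control and runs through a pipeline, the audit trail (when, by whom, how) comes for free with each commit and deployment.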
Continuous process, not a project: every relevant vulnerability, every architectural change and every new version of the benchmark means revisiting and recalibrating. Without compliance monitoring, posture degrades within months.
Example: Hardening a Linux server exposed to the Internet
A freshly provisioned Linux server for a public web application arrives with SSH on its default port accepting password logins, an open recursive DNS resolver, SNMP with the default "public" community, SMB and NFS enabled "just in case", and the local firewall disabled so developers can debug comfortably. In that state, the first Internet-wide scan picks it up within hours, and any reused credential or unpatched CVE turns it into an immediate path into the internal segment.
The hardened version trims the perimeter methodically: only SSH (public-key only, MFA for admin accounts, fail2ban), HTTP/HTTPS behind the load balancer, every other port closed by firewall; DNS, SNMP, SMB, NFS and every unused package are removed; a CIS baseline is applied to the OS, with SELinux or AppArmor in enforcing mode, tightened POSIX permissions and log forwarding to the SIEM; disks are encrypted at rest and a monthly patch cycle is established with emergency windows for critical CVEs. The result is not an invulnerable server; it is one that no longer shows up at the top of the attacker's queue and that, when something does go wrong, leaves traces the SOC can work with.
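The first steps of that trim can be sketched as a script, assuming a Debian/Ubuntu-style host; package and service names vary by distribution, so treat this as an outline, not a runbook:

```shell
# Sketch: generate (not run) a minimal hardening script for the example above.
# Package names (snmpd, samba, nfs-kernel-server, bind9) are Debian-style.
cat > harden-web.sh <<'EOF'
#!/bin/sh
set -eu
# 1. Remove unused services and packages
apt-get -y purge snmpd samba nfs-kernel-server bind9
# 2. SSH: keys only, no root login
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl reload ssh
# 3. Firewall: only SSH and HTTP/HTTPS inbound
ufw default deny incoming
ufw allow 22/tcp && ufw allow 80/tcp && ufw allow 443/tcp
ufw --force enable
EOF
chmod +x harden-web.sh
```

The remaining controls from the paragraph above (CIS baseline, MAC in enforcing mode, log forwarding, disk encryption, patch cadence) are better expressed in IaC than in an ad-hoc script.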
Common mistakes
- Treating hardening as a document you write once and archive. Baselines age, new services appear, OS versions change; without periodic review, paper compliance no longer matches the real posture.
- Applying controls only in production and leaving staging or lab environments untouched. Those are precisely the environments that usually connect to production, hold valid credentials and receive the least scrutiny.
- Hardening only the OS and ignoring everything else: applications, databases, middleware, network gear and cloud accounts stay on defaults because "that is not our area". Attackers do not respect those org charts.
- Confusing hardening with total security. A hardened system is still vulnerable to zero-days, phishing and business logic errors; it only closes the easy doors. It must coexist with detection, response and backups.
- Over-hardening that breaks the business: blocking legitimate binaries with overly strict AppLocker rules, closing ports that a payment system needs, disabling protocols the ERP still uses. When power users find a shortcut, hardening evaporates.
Frequently asked questions
Who should own hardening in an organisation?
Ownership must be distributed but coordinated. Infrastructure typically owns OS and network hardening; development and DevOps own application and pipeline hardening; identity and cloud teams own accounts and IAM; and the CISO defines the standard, measures compliance and signs exceptions. The common trap is letting each team define its own baseline: the result is inconsistency and a painful audit. What works is a shared standard —based on CIS, STIG or sector frameworks— translated into reusable templates and pipelines.
How do you keep hardening from degrading over time?
By automating both the application and the verification. Baselines are codified as Infrastructure as Code with standard IaC tools and applied on every deployment, not manually. In parallel, compliance-as-code or posture-management tooling compares reality against policy and alerts when deviations appear (a service re-enabled, an account with more privileges than agreed, a firewall rule opened). Integrated with SIEM and change management, hardening stops being a project and becomes an operational control.
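The verification half of that loop can be illustrated in a few lines. A toy sketch of compliance-as-code: real tooling (OpenSCAP, InSpec, Lynis) does this at scale against full benchmarks; the sample file and expected lines here are illustrative:

```shell
# Sketch: compare a config file against two agreed baseline lines and
# fail loudly on drift.
cat > sshd_config.sample <<'EOF'
PasswordAuthentication no
PermitRootLogin no
EOF

check() {  # check <file> <expected-line>
  grep -qx "$2" "$1" || { echo "DRIFT: $1 missing '$2'"; return 1; }
}
check sshd_config.sample 'PasswordAuthentication no'
check sshd_config.sample 'PermitRootLogin no'
echo "baseline OK"
```

Run on a schedule and wired into the SIEM, the same pattern turns a re-enabled service or a reopened port into an alert instead of a silent regression.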
Does hardening hurt performance or availability?
On modern systems the overhead is low when done sensibly. Some layers —verbose logging, at-rest encryption, TLS inspection— have a cost and must be sized. The bigger operational risk is not CPU, it is breaking an integration that relied on an insecure configuration (SMBv1, basic authentication, a wide-open port). That is why changes are tested first in staging, rolled out in waves and documented. The residual risk of not hardening always outweighs the cost of doing it properly.
Is there a big difference between Windows and Linux hardening?
The principles are the same: minimise services, apply patches, reduce privileges, protect credentials, log and monitor. The tooling changes: Windows is hardened with Group Policy, AppLocker/WDAC, the built-in EDR and domain policies; Linux with firewalld/nftables, SELinux or AppArmor, POSIX permissions and sudo/PAM policies. CIS Benchmarks and STIGs publish specific guides for each system and version, so organisations can audit real state against the standard without reinventing the wheel.