Protect to Enable: How to Integrate Security and Business
Throughout my career, I have seen the same pattern repeat itself over and over: for many business or operational decision-makers, security appears as a problem, a brake, something that “complicates”, “delays” or “makes things more expensive.” Not as an enabler, not as part of the design, but as an annoying layer imposed from the outside. And the truth is, in many cases, they are not entirely wrong.
The error doesn’t lie so much in the business perceiving security as an obstacle, but in assuming that this is inevitable. This is where the founding myth is born: the idea that security and business are opposing forces, condemned to never understand each other. As if improving one necessarily implies worsening the other.
And that is a comfortable narrative. For the business, because it allows them to blame “the security people” when something doesn’t move forward. For security, because it justifies rigid controls under the argument of “I’m just managing the risk” and excuses its shortcomings with “the business is not on board.” Everyone remains relatively satisfied… and the system remains poor.
This approach clashes head-on with an idea that Intel formalized years ago under the slogan “Protect to Enable”: security does not exist to say “no,” but to allow the business to move forward securely. Compared to what one commonly sees, it implies a complete shift in mindset. Protecting is not the ultimate goal; the ultimate goal is to enable.
For a detailed account of how this idea was put into practice at Intel, see the book “Managing Risk and Information Security: Protect to Enable” by Malcolm Harkins, former Vice President of Security and Information Risk Management at Intel.
When this philosophy is ignored, security is designed in the abstract: as a collection of controls to be imposed, disconnected from how people actually work. That’s where we see VPNs for everything, blocks at the slightest doubt, access requests that take weeks, manual exceptions, and opaque processes. It is not solid security: it is clumsy security, controls incapable of coexisting with real operations.
The business, for its part, rarely expresses this in technical terms. It doesn’t say, “this control doesn’t mitigate risk well” or “this introduces more indirect attack surface.” It says something much more brutal and honest from its perspective: “I can’t work.” And when working becomes difficult, people don’t stop to debate threat models: they look for shortcuts.
This is the point where friction becomes dangerous. When security slows down the business, the risk doesn’t disappear. It shifts, often becoming invisible. That is where “shadow IT” appears, along with shared credentials, parallel workflows, and “temporary” solutions that last for years. The business keeps moving forward, but off the security radar.
The Protect to Enable philosophy aims precisely to break that vicious cycle: designing controls that accompany the business flow instead of fighting against it. Controls that make doing the right thing easier than evading them. Because if the only way to move forward is to bypass security, the system has already failed, even if all the checks in the regulatory framework are green.
The problem isn’t “security vs. business.” The problem is security designed without understanding the business. It is confusing rigor with rigidity. And breaking this myth is the first step toward something more honest: accepting that security doesn’t compete with the business, but it does compete with unnecessary friction.
When security is designed out of fear
A large portion of the security controls we see in organizations suffering from the aforementioned issues are not born from a serious risk analysis, but from something much more primitive: fear. Fear of a recent or future incident, of an approaching audit, or—even worse—of being exposed in front of the hierarchy.
The examples are well-known. After a ransomware attack, everything is blocked “until further notice.” Following a successful phishing attempt, layers of friction are added to access and more filters are placed on email and browsing. When an audit arrives, last-minute controls appear that aren’t very well understood. There is no redesign or systemic review: there is only reaction.
This type of decision-making has a very clear emotional logic. The human brain is poorly wired to handle abstract, low-frequency risks. The “Availability Heuristic” explains that people estimate the probability of an event based on the ease with which they can recall similar examples. Thus, a concrete, visible, and recent incident carries more weight than a hundred well-modeled threats. This is the availability bias in action: what just happened feels more likely, more dangerous, and/or more urgent, even if statistically it is not.
Added to this is another key factor: blame aversion. In many organizations, security is not measured by its effectiveness, but by its ability to prove that “something was done”. Adding a visible control reduces internal anxiety: if something happens again, at least one can say the posture was hardened. In this context, the control doesn’t have to be good; it has to be defensible.
This leads to decisions like forcing massive password changes, blocking legitimate access, imposing endless manual processes, or requiring absurd approvals. These are controls that reassure the person who signs off on them, but not the person who has to use them.
The problem is that security designed out of fear tends to be cumulative and asymmetric. Every incident adds a layer, but the previous ones are almost never reviewed. The system becomes heavier, more fragile, and harder to operate. No one asks which controls remain relevant; they just keep adding one more “just in case”.
From a psychological standpoint, this is understandable: fear pushes us to maximize the sense of immediate control, even if it worsens the long-term outcome. But that reflex generates friction, human error, and, paradoxically, more operational risk.
The business, for its part, reacts as best it can—it adapts. Shortcuts emerge, along with informal exceptions, shared credentials, parallel processes and unapproved tools. Consequently, having become unworkable in practice, security stops being part of the real system and becomes something almost decorative.
Designing security out of fear is not an intellectual error; it is a human failure. And recognizing it is key, because as long as decisions are made to soothe anxieties instead of reducing real risk, the gap between security and operations will only continue to grow.
Operational friction: the invisible cost
Operational friction is an underestimated side effect of poorly designed security. It doesn’t appear in audit reports, usually lacks its own KPI, and is difficult to account for, but it is there stealing time, attention, and energy every single day. Every extra step to access a system, every manual approval, every improvised workaround is lost working time. This time leaks away drop by drop and is sometimes hard to perceive because the organization doesn’t stop, but it does become slower and more prone to error.
An example I have seen myself: access to systems that require a VPN even when the service is already in the cloud, with the possibility of implementing strong authentication and proper identity controls. The result of this is not more security, but rather unstable sessions, drops and reconnections, the cost of maintaining a VPN server, and users who end up leaving the VPN open all day “just in case”. The control exists, but its real-world use degrades the system.
Another frequent case: onboarding or permission change processes that take days. From a security standpoint, it is “access control”. From an operational standpoint, it is a bottleneck. In these cases, the business does not wait: it shares credentials, reuses accounts or asks for favors. Friction does not eliminate unauthorized access; it pushes it into informality.
Furthermore, all this friction has a significant and counterproductive psychological effect: if interacting with security is constantly frustrating, people stop seeing it as part of the system and perceive it as an external obstacle. The user doesn’t think “this protects me”, they think “this is making me waste time.” From that point on, every control is experienced as something to be bypassed.
Paradoxically, many of these controls designed to reduce human error end up increasing the likelihood of error, because they force people to work under stress, using shortcuts, and without clarity. Security, then, ends up creating the ideal conditions for the very incidents it intends to prevent.
There is also a less visible but deeper cost: burnout. Teams tired of fighting with tools, processes that no one understands, and exceptions that become the rule.
And the most ironic part: from the outside, everything seems to work. The controls are there, the policies exist, access is “managed”. I have seen organizations pass audits this way. But the real system —the one people use to work— operates in parallel. When this point is reached, it no longer matters how many controls are in place. The operational cost outweighs any theoretical benefit, and real risk increases, no longer due to a lack of controls, but due to an excess of friction.
Shadow IT: the visible symptom
As a result of operational friction, “Shadow IT” usually emerges. This term describes the set of tools, services and processes that the business adopts outside of formal IT and security channels.
Shadow IT does not appear because people are irresponsible or “don’t understand security”. It appears when official paths turn out to be slow, rigid, or outright unworkable. It is an adaptive response; the business does not set out to violate policies, but rather to meet objectives. When gaining access to a tool takes weeks or months, when sharing information securely is harder than sending it via any free SaaS, or when requesting permission means putting a project on hold, the informal path becomes the most logical one. In general, I would say it is not a deliberate decision but pure human efficiency.
Security traditionally interprets Shadow IT as a discipline problem. Thus, the typical response is to harden controls, block services, and issue stricter policies. In other words: attacking the symptom with more friction, the same mechanism that generated the problem in the first place. But if we look at it from a psychological standpoint, Shadow IT is consistent. People optimize to get their work done with the lowest possible cognitive cost. If the “secure” path is long or frustrating, the brain will choose the shortcut. Here, as in many other things, it is vital to design systems around human behavior rather than against it.
The interesting thing in all of this is that the existence of Shadow IT does not eliminate the need for security; rather, it shifts it off the radar. Unapproved tools still handle sensitive data, shared accounts still exist, and parallel workflows still operate. But now, no one sees them, no one monitors them, and no one improves them.
In mature organizations, Shadow IT is treated as a signal. An alert that controls are not aligned with operational reality. The question isn’t “why are people failing to comply?” but rather “what are we preventing them from doing securely?”. This presents a key opportunity: often, Shadow IT reveals real business needs long before formal processes do. Ignoring it or simply banning it is a waste of valuable information.
Returning to the Protect to Enable philosophy: if you design controls that enable work, Shadow IT loses its purpose. Not because it is specifically prohibited, but because it is no longer necessary. The official path becomes shorter, more reliable, and safer than the shortcut.
Even when security works well, Shadow IT is not expected to disappear entirely, but it is reduced to isolated and —importantly— visible cases. When security works poorly, Shadow IT becomes the organization’s real infrastructure, and the “official” one becomes an empty facade. Observing this is a very valuable indicator: if you have a lot of Shadow IT, you don’t have a user problem, you have a design problem.
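One rough way to make that indicator visible, purely as an illustration, is to look at where traffic actually goes. The sketch below assumes a hypothetical proxy or DNS log exported as CSV with “user” and “domain” columns, plus a locally maintained list of sanctioned services; none of these names come from a specific product.

```python
# Illustrative sketch only: estimate a rough "Shadow IT signal" from proxy/DNS logs.
# Assumes a hypothetical CSV export with "user" and "domain" columns and a locally
# maintained allow-list of sanctioned services; adapt both to your environment.
import csv
from collections import Counter

SANCTIONED = {"sharepoint.com", "github.com", "okta.com"}  # example allow-list

def shadow_it_signal(log_path: str) -> Counter:
    """Count hits to domains that are not on the sanctioned list, per domain."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().strip()
            # Treat any subdomain of a sanctioned service as sanctioned.
            if not any(domain == s or domain.endswith("." + s) for s in SANCTIONED):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in shadow_it_signal("proxy_log.csv").most_common(10):
        print(f"{domain}: {count} requests outside sanctioned services")
```

The point is not the tooling but the reading: a long tail of heavily used, unsanctioned services is design feedback, not a list of people to reprimand.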
Threat modeling
After seeing how operational friction and Shadow IT emerge when security is designed without context, an inevitable question arises: how do you model security so that this doesn’t happen? The answer doesn’t lie in more controls, but in better decisions. That is where a well-executed threat model comes in. A threat model should not be a document just for compliance or a spreadsheet for an audit. It is a tool for thinking before controlling. To decide what to protect, from whom, how, and at what cost. And also, what not to protect in the same way.
The first mental shift is accepting that not everything has the same value or the same risk. It sounds obvious, but many security architectures are designed as if all systems were equally critical and all access equally dangerous. This leads directly to uniform, rigid, and often expensive controls.
A good threat model starts by understanding the business, not the technology. Which processes generate value? Which data, if compromised, truly hurts? Which outages are tolerable and which are not? Without those answers, any control is a shot in the dark. Ideally, a solid information asset inventory should serve as the foundation for threat modeling.
The second step is identifying realistic threats, not imaginary ones. It’s not about thinking of omnipotent attackers, but about prioritizing plausible adversaries given the industry, context and actual exposure. When a threat model exaggerates the adversary, controls will overreact. And overreaction almost always results in operational friction. It is essential to evaluate probability and impact without drama. Not every external access is a catastrophe. Not every human error ends in disaster. Mature security accepts uncertainty and works with it, instead of trying to eliminate it through friction.
A third key point is understanding where to place the control. Threat modeling allows us to choose control points that reduce risk without interfering —or with minimal interference— in operations. I’m talking about decisions like protecting identity instead of the network, using automation instead of manual approvals, or prioritizing visibility and actionable alerts over rigid restrictions. For example: if the primary threat is credential theft, imposing VPNs and network restrictions may be less effective than strengthening identity, implementing phishing-resistant MFA and anomalous behavior detection. The right control in the right place is less intrusive and more effective.
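To make that last example concrete, here is a minimal sketch of what an identity-centric access decision can look like, as opposed to a network-perimeter one. The signal names, thresholds, and the three-way outcome are assumptions for illustration, not a reference implementation of any particular product.

```python
# Minimal sketch of an identity-centric access decision, as opposed to a
# network-perimeter one. All signal names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessContext:
    mfa_method: str          # e.g. "fido2", "totp", "sms", "none"
    device_compliant: bool   # posture signal from device management
    anomaly_score: float     # 0.0 (normal) to 1.0 (highly anomalous)

PHISHING_RESISTANT = {"fido2", "passkey"}

def decide(ctx: AccessContext) -> str:
    """Return 'allow', 'step_up', or 'deny' based on identity signals,
    not on whether the request comes from the corporate network."""
    if ctx.anomaly_score > 0.8:
        return "deny"      # clearly anomalous behavior
    if ctx.mfa_method in PHISHING_RESISTANT and ctx.device_compliant:
        return "allow"     # strong identity plus a healthy device
    return "step_up"       # ask for stronger proof instead of blocking outright

print(decide(AccessContext(mfa_method="fido2", device_compliant=True, anomaly_score=0.1)))
```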
Another important point: an honest threat model recognizes that there are risks not worth mitigating entirely, what we call accepted risk. Not because they can’t be mitigated, but because the cost —operational, economic, or human— outweighs the benefit. If well-analyzed and managed, these risks do not represent negligence but rather conscious design.
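As a sketch of what a decision-oriented threat model entry might capture, including explicitly accepted risk, something like the structure below can work. The field names and the 1-to-5 scales are one possible convention I am assuming for illustration, not a standard.

```python
# Sketch of a decision-oriented threat model entry. Field names and the 1-5
# scales are an illustrative convention, not a standard.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThreatEntry:
    asset: str              # from the information asset inventory
    threat: str             # a plausible scenario, not an omnipotent attacker
    likelihood: int         # 1 (rare) to 5 (expected)
    impact: int             # 1 (negligible) to 5 (business-critical)
    treatment: str          # "mitigate", "accept", "transfer"
    control: Optional[str]  # chosen control, if any
    rationale: str          # why this decision, including its operational cost

    @property
    def priority(self) -> int:
        return self.likelihood * self.impact

entries = [
    ThreatEntry("customer DB", "credential theft via phishing", 4, 5,
                "mitigate", "phishing-resistant MFA", "high impact, cheap to reduce"),
    ThreatEntry("internal wiki", "exposure of non-sensitive docs", 2, 1,
                "accept", None, "mitigation cost exceeds the harm; reviewed yearly"),
]
for e in sorted(entries, key=lambda e: e.priority, reverse=True):
    print(f"{e.priority:>2}  {e.asset}: {e.threat} -> {e.treatment}")
```

Sorting by likelihood times impact is deliberately crude; the value is in forcing the treatment and its rationale, including the operational cost, to be written down.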
Finally, a well-executed threat model is a living thing: it isn’t done once and archived. It changes with the business, technology, and context. But each iteration tends to reduce friction, not increase it, because it learns from real-world operations. In contrast, security designed out of fear accumulates controls, while security designed from a threat model chooses controls.
Metrics that matter
Metrics are important for evaluating a system’s effectiveness, but it is crucial to know what to measure. If you measure the wrong things, you drive bad decisions, even if the intention is good. In security, this happens quite often: numbers are celebrated that say nothing about real risk or business impact.
In practice, I have seen many organizations that still measure security by visible activity: number of controls implemented, volume of blocks, number of alerts generated, or compliance percentage. These are “comfortable” metrics because they are easy to count and report, but they are dangerous because they reward friction instead of effectiveness. When success is defined as “blocking more”, the only possible outcome is exactly that: more blocks, more friction. On the other hand, organizations that aren’t even clear on what they should be measuring end up using any available number as an indicator of “good security”; which, of course, is not useful for evaluating security either.
A useful metric does not measure the existence of a control, but its effect. It should allow you to answer a simple question: “Does this control reduce risk without damaging operations?” If the metric doesn’t help you see that, it likely serves no purpose.
Some examples of metrics that actually say something:
Mean time to enable legitimate access. Not how long it takes to close the ticket, but how much time passes until a person can actually work. If this number grows, security is introducing friction.
Number and type of active exceptions. Exceptions are not an isolated failure; they are a structural signal. A high number of exceptions indicates that the control does not fit reality.
Actual usage of controls (not just existence). A control that exists but is systematically avoided protects nothing. Measuring effective adoption —for example, direct access vs. shortcut access— is much more valuable than counting configured rules.
Incidents caused by operational friction. Errors caused by complexity: shared credentials, misconfigured access, “temporary” bypasses.
Recovery time for incorrectly blocked access. False positives matter. A system that blocks frequently but takes a long time to correct itself punishes the business.
Ratio of preventive vs. reactive controls. More well-placed prevention usually means fewer interruptions.
User feedback as operational data. I’m not talking about satisfaction surveys, but concrete signals: how many tickets are opened due to friction, how many steps a secure action requires, or how many times help is requested for the same issue.
These metrics have one thing in common: they force you to look at the entire system and not just the isolated security posture. They make the operational cost of the control visible and allow it to be compared against the risk it mitigates. From the logic of Protect to Enable, a control that blocks work is not protecting anything relevant. Measuring security in terms of real impact is the only way to distinguish between controls that enable the business and those that hold it back.
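As an illustration of the first metric in the list, here is a sketch of how “mean time to enable legitimate access” could be computed from access-request records. The record fields are hypothetical; the important design choice is measuring until the person can actually work, not until the ticket is closed.

```python
# Illustrative sketch: mean time to enable legitimate access, computed from
# access-request records. The fields "requested_at" and "usable_at" are
# hypothetical; "usable_at" is when the person could actually start working.
from datetime import datetime, timedelta
from statistics import mean

requests = [
    {"requested_at": datetime(2024, 5, 6, 9, 0),  "usable_at": datetime(2024, 5, 6, 11, 30)},
    {"requested_at": datetime(2024, 5, 7, 14, 0), "usable_at": datetime(2024, 5, 9, 10, 0)},
    {"requested_at": datetime(2024, 5, 8, 8, 0),  "usable_at": None},  # still waiting
]

def mean_time_to_enable(records: list[dict]) -> timedelta:
    """Average delay between asking for access and being able to work with it."""
    delays = [r["usable_at"] - r["requested_at"] for r in records if r["usable_at"]]
    return timedelta(seconds=mean(d.total_seconds() for d in delays))

print(f"Mean time to enable access: {mean_time_to_enable(requests)}")
```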
The role of the Security Architect
Considering all of the above, the role of the security architect is to design systems that manage risk deliberately and allow operations to run without unnecessary friction. Or, to put it another way: to design secure systems that people can use without having to think about security.
When security fails, it is rarely due to a lack of controls. It is due to a lack of design. And designing implies understanding how the business works, how people make mistakes, what hurts when something breaks, and what doesn’t. It implies accepting that perfection does not exist and that risk is not eliminated: it is managed.
A security architect should not start by asking, “What control do we apply?” but rather, “What are we trying to enable?”. The difference is subtle, but it changes everything. Instead of imposing barriers, paths are designed. Instead of rigid rules, principles are defined that hold up even when the context changes.
The architect is also the one who translates. They translate technical risk into business impact, and business urgencies into conscious technical decisions. They do not oversimplify, but they do not add unnecessary complexity either. They make trade-offs explicit, even when it’s uncomfortable.
Another key function is resisting the temptation of fear that I mentioned earlier. After an incident, it is common for everything to push toward hardening. The security architect is the one who must stop, look at the entire system, and ask if the new control truly reduces risk or merely soothes anxieties.
Ultimately, Protect to Enable becomes a design criterion. If a control does not enable secure work or does not scale, it is reviewed and then redesigned or replaced.
When security slows down the business, the problem is not the business
Over the years, I have seen countless cases where security and business are in constant tension, almost as if it were inevitable. But in my experience —and from what I have read— this happens not because of a clash of objectives or a true incompatibility, but because of a design problem.
It goes without saying that this article is not about relaxing controls or choosing speed over security. Rather, it is about how fear, misplaced friction, and the wrong metrics push us to build systems that force the business to bypass security in order to move forward.
Protect to Enable is not just an optimistic slogan: it is a design discipline. It means starting from what you want to make possible, modeling real threats, accepting explicit trade-offs, and building controls that reduce risk without shifting the cost onto operations. It means designing controls that work in practice, not just in audits.
Well-designed security does not compete with the business: it accompanies it. When that doesn’t happen, the problem is not that the business wants to move fast, the problem is that security is conceived as an obstacle, and in those cases, it needs to be redesigned.
To close, for anyone who wants to dive deeper into this way of thinking, Protect to Enable is a very good read —I learned a lot from that book— along with any other text that emphasizes design, context, and real-world effects over checklists and prohibitions. Also, if you’d like to discuss this, I’m interested in reading your comments, critiques, and personal experiences. You can find my contact methods in the footer of this blog.