
Bootstrapping: The best AI strategy is to avoid learning today’s AI tech


To prepare for a future enriched by artificial intelligence technologies, cybersecurity teams need to avoid learning about AI.

That’s right — don’t learn about AI.

While that might seem unreasonable or just plain batty when we’re talking about a technology expected to be a permanent game-changer for cybersecurity, it’s actually the core proposition of a planning method called “bootstrapping,” a logical approach to safely using a technology that is still rapidly developing and transforming.

Bootstrapping — a term derived from “pulling oneself up by one’s own bootstraps” — can be defined for our purposes as a self-perpetuating process that develops without external input. It suggests that security stakeholders must plan for the near term while explicitly acknowledging that today’s practices may be discarded in the long term.

At a minimum, CISOs must be wary of applying big conceptual ideas about AI to their organization’s doctrine based on near-term experiences alone. After all, first-movers who bank on big ideas about new technology risk a great deal and often end up with solutions that fit yesterday’s conditions more than they reflect today’s opportunities.

Gambling now on today’s AI might not pay off in the future

Security teams, like all elements of business or government practice, want the benefits of AI now lest they lose ground to competitors, adversaries, or other constituents. But it’s a big gamble to expect that short-term gains will morph into the capacity to operate effectively in the future.

In this context, alternative approaches to AI adoption make sense, particularly those that advocate for “attritable” outcomes. When a capability is attritable, it is designed to be expendable: meant only to address the problems of today before being discarded. Unfortunately, the operational realities of attritable AI are often murky at best.

So how should stakeholders identify best practices for the near term that are both beneficial and relatively free of strings? When should AI solutions from third-party providers be trusted, given that market trends might not reflect tomorrow’s developments? And who within your team should be empowered to make such decisions?

Bootstrapping offers potential answers and can be employed to help stakeholders find the right concept of AI usage without directly investing in it today.

Preparing for the future without planning for it

How best can an emergent capability be used to upgrade existing operations? Generally, we expect an organization to draw on what it already knows and to throw things at the wall to see what sticks.

On the one hand, relying on the conventional wisdom of past experiences can inform decision-makers as to what reasonably good practices — if not quite best practices — might look like. On the other, letting your team experiment with new technologies might reveal how new techniques can gel with existing operations.

Bootstrapping is a third way forward that requires neither big ideas about something like AI from the get-go nor sweeping experimentation.

Instead, bootstrapping-in-action involves finding paradigm examples of something, such as a textbook case of AI adoption that was a game-changer for cybersecurity teams during a crisis. Then, we deconstruct these ideal-type examples of a desired outcome to rapidly provide a model of adoption that is accessible, attritable, and customizable.

To bootstrap, learn like a toddler

This goes against the traditional admonition not to let outlier examples of success or failure inform practice in isolation. But because we are planning only for the near term, we limit the chance that a misstep today will affect the future.

Importantly, bootstrapping isn’t just about slapping together near-term solutions. Rather, it’s a set of steps critical for innovation under circumstances where we can’t effectively guess at future developments from where we stand today.

Perhaps the best way to understand the approach is with an analogy to childhood learning. When we tell toddlers about numbers, they have no idea how numbers drive complex human activities (statistical analysis, for instance). Instead, numbers are words learned, perhaps, by memorizing a song that uses them (“One, Two, Buckle My Shoe”).

The song is a placeholder of sorts, enabling the toddler to refer to numbers without really using them properly. Over time, this morphs into an ability to identify one object at a time, then to refer to a pair of things, then to count up to five fingers on a hand, and so on.

Finally, by the time a child is four or five, they grasp the systemic concept behind numbers and can generalize their use for increasingly complex tasks.

Placeholder learning is natural for humans but often gets overlooked by organizations. These placeholders give us expressive power and gradually normalize our thought processes toward new ways of viewing the world.

While children arrive at the same result in several different ways, the process clearly helps overcome a core challenge in learning truly novel concepts. In essence, bootstrapping limited use cases onto existing touchpoints today provides the social and cognitive context for making the critical conceptual leap in the future.

Bootstrapping AI for cybersecurity

Consider a hypothetical company that wants to employ AI systems at the intersection of data loss prevention (DLP), consumer information protection, and customer engagement during the post-incident response phase of a cybersecurity crisis.

Connecting these processes makes substantial sense for companies of all stripes. Natural language processing techniques can be employed to monitor and detect unauthorized exfiltration of sensitive data. Behavioral analytics can find abnormal patterns in data access.
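To make the narrow, attritable shape of such capabilities concrete, here is a minimal sketch in Python: a pattern-based check for sensitive tokens in outbound text, plus a simple statistical look at per-user data-access volumes. The patterns, thresholds, and function names are hypothetical illustrations of scope, not a production DLP detector.

```python
# Illustrative sketch only: a deliberately narrow, "attritable" DLP check of
# the kind described above. Patterns and thresholds are hypothetical.
import re
import statistics

# Hypothetical patterns for sensitive tokens.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US-SSN-style strings
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-number-style strings
]

def contains_sensitive_data(text: str) -> bool:
    """Flag outbound text that matches any sensitive pattern."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

def access_anomalies(daily_counts: dict[str, list[int]], z_cutoff: float = 3.0) -> list[str]:
    """Flag users whose latest access volume sits far outside their own history."""
    flagged = []
    for user, counts in daily_counts.items():
        history, latest = counts[:-1], counts[-1]
        if len(history) < 2:
            continue  # not enough history to judge
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0  # avoid dividing by zero
        if (latest - mean) / stdev > z_cutoff:
            flagged.append(user)
    return flagged

if __name__ == "__main__":
    print(contains_sensitive_data("Refund sent to card 4111 1111 1111 1111"))  # True
    print(access_anomalies({"alice": [10, 12, 11, 13, 95], "bob": [5, 6, 5, 7, 6]}))  # ['alice']
```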


Customer-facing chatbots can provide rapid, personalized context on data leakage to consumers following an incident. Particularly given emerging regulatory conditions such as those introduced by the EU’s recently passed AI Act, it’s easy to see why you’d want interactive AI capacity.

But the risks are considerable. Obviously, there are serious concerns about the utility of AI systems tasked with providing these services at scale. But there are, more importantly, real concerns about the absorption of sensitive information into training data and subsequent leakage through unintended engagement with consumers or third-party intermediaries. A misstep might create a novel means for would-be attackers to socially engineer future attacks.

Three steps in bootstrapping

So, how can different AI systems and conventional capabilities be blended to obtain new efficiencies and minimize liability?

The first step in bootstrapping a workable, attritable solution is to find an example that typifies the result we most hope to replicate.

The idea is to build a theory of how that ideal outcome in the use of AI for DLP and incident response came about, and so we need paradigm examples of success regardless of where they can be found. Indeed, it is important at this stage that analysts avoid being too fussy about that paradigm. If you’re a CISO for a regional nonprofit focused on veteran support and the perfect example of AI usage along these lines is the case of an Albanian credit reporting agency faced with a data breach, so be it. 

In the second step of bootstrapping, we unpack the conditions of the case, including those that likely won’t apply to many other situations. We need to understand what interacting conditions allowed AI systems to boost effectiveness and avoid misfires.

Perhaps we see that the Albanian firm, to achieve enhanced efficacy with minimal risk of data spillage, relied on resources shared with partner cybersecurity contractors but deployed its response tool only under time-limited and data-constrained conditions that could not be escalated without the involvement of a human arbiter.

The third step of bootstrapping is to strip away the situational, cultural, or other context of the case to leave only the abstract conditions we see linked to success. What you’re left with is a generalized theory of a specific adoption case for AI that can be compared with the operational imperatives of your own organization.

When the two condition sets align, the result is a model for narrow success in deploying AI replete with the dos and don’ts of others’ experiences.
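As a minimal sketch of what that comparison of condition sets can look like in practice, consider the following Python fragment. The condition labels and the AdoptionModel structure are hypothetical placeholders, not a standard taxonomy: one side holds the abstract conditions distilled from the paradigm case, the other the conditions your own self-assessment (discussed below) says you can meet.

```python
# Illustrative sketch only: comparing a bootstrapped paradigm model with your
# own organization's conditions. All condition names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AdoptionModel:
    """A generalized theory distilled from a paradigm case (step three)."""
    name: str
    required_conditions: set[str] = field(default_factory=set)

# Hypothetical conditions abstracted from the Albanian credit-agency example.
paradigm = AdoptionModel(
    name="ai-assisted-dlp-incident-response",
    required_conditions={
        "human-arbiter-for-escalation",
        "time-limited-deployment",
        "data-constrained-training-inputs",
        "shared-contractor-resources",
    },
)

# Hypothetical conditions your own organization can actually meet.
our_conditions = {
    "human-arbiter-for-escalation",
    "time-limited-deployment",
    "in-house-resources-only",
}

satisfied = paradigm.required_conditions & our_conditions
missing = paradigm.required_conditions - our_conditions
print(f"Aligned on {len(satisfied)} of {len(paradigm.required_conditions)} conditions")
print("Gaps to close, or reasons to walk away:", sorted(missing))
```

Representing the model as plain, disposable labels is deliberate: when the paradigm case stops being instructive, the whole artifact can be thrown away, which is exactly what attritability asks for.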

CISO, know thyself (or at least thy organization’s needs)

Bootstrapping is an analytic approach to building know-how that is logical and replicable but intended to work only for the near term and under narrow conditions. It sets the stage for gradual learning about something like AI without actually investing in or seriously onboarding a new overarching operational concept.


Over time, this sets organizations up for learning the critical concepts of AI usage in the future while avoiding overcommitment today.

The key to bootstrapping is self-knowledge. A bootstrapped theory of approach for one organization can be drawn from an ideal example and used to lay out robust, attritable organizational plans for the near term. However, security stakeholders first need to assess their own organizations and determine the conditions that correspond to mission success.

There are a number of ways in which security teams can undertake such self-assessments. Perhaps the most obvious is to apply the same abductive approach to incidents in your organization’s past. For any particular bright spots, what conditions enabled success, and what circumstances stood between that outcome and a less stellar one? Remember, of course, to then strip away the historical context and try to generalize about your organization over time.
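As a minimal sketch of that abductive self-assessment, assuming entirely hypothetical incident records and condition labels, the idea is simply to tally which conditions recur across your past successes:

```python
# Illustrative sketch only: deriving your organization's own success
# conditions from past incidents. Records and labels are hypothetical.
from collections import Counter

# Hypothetical incident history: the conditions that held, and how things went.
past_incidents = [
    {"conditions": {"human-arbiter-for-escalation", "time-limited-deployment"},
     "outcome": "success"},
    {"conditions": {"time-limited-deployment", "in-house-resources-only"},
     "outcome": "success"},
    {"conditions": {"unbounded-deployment"}, "outcome": "failure"},
]

def recurring_success_conditions(incidents, min_share: float = 0.75) -> set[str]:
    """Keep conditions present in at least min_share of successful incidents."""
    successes = [i["conditions"] for i in incidents if i["outcome"] == "success"]
    if not successes:
        return set()
    counts = Counter(c for conds in successes for c in conds)
    return {c for c, n in counts.items() if n / len(successes) >= min_share}

print(recurring_success_conditions(past_incidents))
# {'time-limited-deployment'} with this toy history
```

Stripping the output down to bare condition labels mirrors step three above: the historical context is discarded, and what remains can be compared directly with a bootstrapped paradigm model.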

It’s useful to assume innovations won’t have longevity

Likewise, analysis of organizations with similar footprints to your own can be critical to defining your own theory of success, to then be compared with narrower ideal-type cases during bootstrapping. This might be firms in your own space or equivalents in other global regions. It might also include analysis of alternative pathways for your own organization.

Ultimately, bootstrapping new AI approaches to today’s practices with little expectation of longevity is perhaps most useful for the core conceptual lesson it conveys.


While it is true that AI developments will likely introduce new paradigms of operation in years to come — transformations fundamentally difficult or impossible to articulate today — actively seeking to jump blindly toward that future AI-enabled posture might be a bad idea.

Most cybersecurity stakeholders need to walk alongside emerging capabilities before they learn to gallop — bootstrapping is an accessible, clear-cut, and constructive method for doing just that.


