Does COVID-19 impact your strategy?

For most organisations, the answer is ‘yes’. So, if COVID-19 hasn’t made you re-evaluate your business strategy, the urgent question you must ask is: why not?

Strategic Impact

In mid-March 2020, the COVID-19 novel coronavirus pandemic is developing rapidly across the world. The response from national governments is fragmented, with some countries doing better than others at containing the spread and the impact. The speed and scale of the impact on individuals, organisations and society has been surprising and, in our lifetimes at least, unprecedented. The political and economic shocks have been unexpected and severe.

A strategic plan can seem as defined, solid and stable as, say, the Rockefeller Center, but the events of recent weeks require these plans to be responsive and adaptable. In the current situation, with events changing on an almost daily basis, organisations must keep their strategic goals and plans under constant scrutiny and develop contingency plans so they can respond to evolving threats and opportunities rapidly and effectively. Many organisations are not going to survive the current crisis, and those that do will probably look very different as a result.

PESTEL (which has a number of variants) is a popular strategy analysis framework, guiding external analysis of the global issues affecting every country, industry and organisation. COVID-19 has affected at least three of its areas:

  • Politics
  • Economics
  • Society

There may also be impacts related to Technology, the Environment and Legal matters (many countries are passing emergency legislation, for example). If we just think about the first three, though, it highlights an important point. These aren’t three separate categories. They are perspectives – ways to approach issues to understand them better. Those who are new to PESTEL tend to categorise things as they find them, so the result is a set of lists: political issues, economic ones and so on. This can be misleading. For example, is unemployment or under-employment a political issue, an economic one, or a societal one? It has all three dimensions, so it is related to each of them.

We find it better to categorise issues as Local, National or Global. Using the same example, some countries will rate unemployment as their biggest national issue, while others will rate it lower. We have already claimed that COVID-19 has impacts across politics, economics and society, but we would categorise it as global. However, there will be related and resulting issues, some of which may be national or local. Given the illness it causes, it also, sadly, has impacts at a personal level.

However, there is another aspect of PESTEL that is even more important than the categorisation and isn’t always understood. That issue is complexity.


Complexity arises from the connections between many elements in a situation. When one element changes it can affect any of the others that are connected to it, either directly or indirectly. The effect may not be immediate, so the connection may not be apparent. Complexity explains the law of unintended consequences when a deliberate, careful and often small change later results in something else, often unexpected and not always positive.

How is this relevant to COVID-19 and strategy?

The first point is that the spread and effects of COVID-19 are not an isolated event. This is not only a health issue. We have already discussed the social, economic and political impacts. The link between this new coronavirus strain and an organisation’s corporate and business strategies is complex. It is complex because you can’t just think about the narrow impact of the pandemic, severe and unprecedented as it is. You have to, to use a strategy cliché, stand back and try to see the bigger picture.

The challenge is illustrated well by the World Economic Forum (WEF), which produces a Global Risks Report each year. The latest one was published in January 2020. They take information from professional risk managers and academics, among others, to assess the global political, economic, societal, technological and environmental risks and to quantify their likelihood and impact. They also assess their interactions, which they model and illustrate using an interactive chart. It is a well-researched and beautifully produced resource.

The WEF analysis shows that the risk of a pandemic such as COVID-19 is also linked with other global risks, including:

  • Water crises
  • Involuntary migration
  • Social instability
  • State collapse
  • Global governance failure

All of these are risks that organisations need to consider and factor into contingency planning. They won’t affect every organisation, and the impact will vary by location, industry and other factors. However, the immediate economic impact on airlines, hotels, restaurants and so many other organisations is already evident.

The speed of the impact and the daily developments require organisations to evaluate these external risks and keep their strategy under constant review, updating and communicating changes as effectively as possible, even when many people might be working remotely and feel increasingly disconnected from the organisation.

For example, the impact on our consultancy and training services has been swift and significant. We are reconfiguring these services so that consultancy and training traditionally delivered in person can be delivered remotely. We have a wide range of e-learning modules that can be used as short standalone training solutions or combined with online facilitation to provide remote blended learning and development services.

If you are unsure how to respond to the current crisis, please contact us and we will offer practical advice. The world has changed dramatically and quickly; if your organisation is not changing at a similar speed and to a similar degree, you might want to pause and check whether you should be.


Does investing in training deliver a return on investment to your organisation?

Training return on investment or not?

As with most questions like this, there are many schools of thought. The first difference is between those who promote measuring training return on investment and others who argue that trying to measure it is pointless.

The mainstream view holds that measurement can help evaluate some kind of return on investment (ROI), although factions argue about the best way to achieve this. Another perspective accepts that, while evaluating ROI may be training’s holy grail, trying to measure it may be pointless. According to Personnel Today, proponents of this second view argue that measuring the ROI of learning is not cost-efficient, the formulae aren’t robust and most organisations are not even interested in the topic.

Possibly the best-known mainstream approach to training evaluation is Kirkpatrick’s four levels, which define a sequence of evaluation stages for training:

1. Reaction – participants’ views, often recorded on simple multiple-choice questionnaires (often referred to as “smile sheets” or “happy sheets”), usually at the end of a course or session.
2. Learning – the increase in knowledge, skills, or competence. This is usually assessed during the course, either in practical work or tests.
3. Behaviour – how well delegates transfer learning to their work. This stage seeks to assess changes in behaviour due to training. It may form part of a wider culture change initiative. This usually happens a few months after the training, possibly by interview but more usually some type of observation.
4. Results – the final outcomes from participation in a training programme (usually financial, time-based or performance-based; particular industries might focus on specifics such as safety).

Some practitioners find it difficult to move beyond Levels 1 and 2, even though the most valuable information might exist at the later levels. Kirkpatrick-certified consultants “start with the end in mind,” moving back from Level 4 to define the desired results before designing and developing the programme.

JJ Phillips proposed the addition of a fifth level to measure ROI, achieved by putting a monetary value on the results found at the fourth level and comparing it with the total cost incurred in providing the training.
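As a rough sketch of that fifth level, the calculation is simply net programme benefits expressed as a percentage of costs; the figures below are entirely hypothetical:

```python
def training_roi(benefits: float, costs: float) -> float:
    """Phillips-style fifth level: net programme benefits as a percentage of costs."""
    if costs <= 0:
        raise ValueError("costs must be positive")
    return (benefits - costs) / costs * 100


# Entirely hypothetical figures: a programme costing 40,000 whose monetised
# Level 4 results are valued at 50,000.
print(training_roi(50_000, 40_000))  # 25.0, i.e. a 25% return
```

The hard part, of course, is not the arithmetic but putting a defensible monetary value on the Level 4 results in the first place.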

A provider’s view

From our point of view as training providers, those in the second school of thought, who think measuring ROI is pointless, might only be interested in how well a training course is received by the trainees, so evaluation will typically be based on questionnaires that capture the delegates’ perceptions. The mainstream group, however, may ask a training provider to demonstrate a return using metrics and formulae that they approve of, and these might differ for each organisation, which is a bigger challenge.

Which school of thought is right?

The answer is probably that both viewpoints are valid to some extent, and a more specific answer will depend on a combination of factors that are unique to your organisation; however, let’s consider some of the evidence that is available.

An important issue must be the quality of any training. With mandatory and vocational courses, the content and syllabus will be well established, so any questions of quality will tend to be focused on the method of delivery. Management training is different, however. The delivery is still important, but several authors have argued that the content of management training is influenced by fads, fashions and theories peddled by management gurus that aren’t supported by any evidence that they work. Even top business schools such as Stanford and Harvard have been criticised for teaching pseudo-scientific management models.

Investing in existing talent is a strategic issue for organisations, according to research by the University of Portsmouth on behalf of the Chartered Institute of Personnel and Development. Their report explores both qualitative and quantitative metrics and introduces the idea of moving from measures of return on investment to return on expectation, which could be difficult to define.

The Oxford Handbook of Evidence-Based Management contains a simpler example of how the return on investment of a bank’s leadership development programme was measured. Half of the sample of managers were randomly assigned to receive the leadership development and the remainder was left as a control group with no training. The people who worked for the managers were surveyed and the results showed increases in the charisma, intellectual stimulation and consideration of the managers who were trained compared to those who weren’t.
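The logic of that kind of evaluation can be sketched in a few lines; the survey scores below are hypothetical stand-ins for the data such a study would collect:

```python
import statistics

# Hypothetical survey scores (1-5 scale) from staff rating their managers,
# mirroring the study's design: trained managers versus an untrained control group.
trained_scores = [4.1, 3.8, 4.4, 4.0, 4.2, 3.9]
control_scores = [3.5, 3.6, 3.2, 3.7, 3.4, 3.6]

# Because assignment to the groups was random, the difference in average
# ratings estimates the effect of the programme itself.
effect = statistics.mean(trained_scores) - statistics.mean(control_scores)
print(f"Average rating uplift: {effect:.2f}")
```

The random assignment is what does the work here: without the control group, any improvement could just as easily be attributed to seasonality, selection, or general experience.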

As suggested at the beginning, the answer to whether investing in training delivers a return is not straightforward, but it becomes clearer by using an evidence-based approach.

This is an update of an article that appeared in the New Statesman magazine in 2014.

What does good professional training and development look like?

Professional Training and Development in Context

A simple way of thinking about professional training and development in the workplace is that it supports formal learning, while some ‘learning to learn’ skills may also help with the informal learning that happens in normal daily life. Developing knowledge in an organisation requires formal and informal learning approaches to be integrated, because neither will enable knowledge to be acquired on its own.  Therefore, the first conclusion is that formal corporate training and development should treat informal learning as both a predecessor to and a successor of any formal intervention.

Formal learning

Formal learning activities have the goal and process of learning defined by the organisation. They occur in the work context and develop people’s skills and knowledge through a structured programme of lectures, discussions, simulations, role plays and other instructional activities.  The training is planned and directed by a professional trainer, which might seem a good thing, but a key criticism of this formality is that it occurs outside the context of daily practice.

Informal learning

Informal learning is the most prevalent form of workplace learning. It is integrated with daily work routines, triggered by an internal or external stimulus, perhaps unconsciously, and can be both haphazard and influenced by chance.  More strictly, it is an inductive process of reflection and action, linked to learning with others.


Formal professional training and development

Another important factor in professional training is that it applies to an adult audience. However, the literature is not complete or consistent in defining good practice.

A partial list includes:

  • Sensory stimulation theory
  • Reinforcement theory
  • Cognitive-Gestalt approaches
  • Holistic learning theory
  • Facilitation theory
  • Experiential learning
  • Action learning
  • Adult learning (Andragogy)

Current practice in much professional training and development draws on many of these areas.  For example, facilitation theory underpins much of the corporate training in developed economies by proposing that learning occurs through the trainer acting as a facilitator, establishing an atmosphere in which learners feel comfortable considering new ideas, recognising there is often resistance to changing currently held ideas, assumptions and preferences.  Reinforcement theory supports the common practice of awarding certificates of completion.  The evidence base for learning styles is not solid, but most professional training involves a mixture of activities that address preferences for learning based on Kolb’s research findings that adults learn in four ways through:

  • experience
  • observation and reflection
  • abstract conceptualisation
  • active experimentation

This post doesn’t have the space to discuss the ideas around learners’ emotional responses, but the influence of external factors on learners’ emotional states is important, and their experiences will shape their openness or hostility to different learning activities.  For example, while some people enjoy role plays, others find them incredibly stressful and disturbing.

Research base

Some of the research in the field is low quality, so there is more work to do.  However, reputable sources do exist and they offer some guidelines that can be applied.  Consequently, we use these in designing and developing CBMSc training services.

For example, there are five key principles for adult training that say it should be:

  • immediately useful
  • relevant
  • welcoming
  • engaging
  • respectful

Knowles has done the most in recent years to highlight the importance of understanding how adult learning differs from approaches for children.  His ideas support the five principles above by suggesting that adults:

  • Bring a lot of experience that trainers can use as a resource.
  • Expect to influence what they learn, how they are educated, and how they will be evaluated.
  • Respond to active participation, which should be included in the design and delivery of education.
  • Need to be able to see applications for new learning.
  • Need their responses to be acted upon when asked for feedback.


There is more to learn about the most effective methods for professional training and development.  However, for now, some key points to consider are:

  • Formal training and development is only ever a brief interruption to a constant process of informal learning, so try to integrate the two
  • There should be a clear goal and process
  • Trainers should act as facilitators more than teachers
  • Development should contain a mix of approaches leading to relevant, practical and actionable results
  • Treat everyone with respect

And finally, the main omission from the literature, from our perspective, is the importance of evidence-based content.  There have been some promising initiatives in the airline industry, which seems to be taking a lead in upgrading its training to be more evidence-based, and the CIPD is also showing leadership in this area.  We need more safety-critical industries to follow the air transport industry’s lead, and other professional institutions to emulate the CIPD.


Improvement or optimisation?

Optimisation and Improvement – what’s the difference?


“What’s the difference between optimisation and improvement?”  This is a question I was asked by a Six Sigma Master Black Belt during a recent Business Process Improvement course. They said their managers seemed to use the words ‘improve’ and ‘optimise’ (or ‘optimize’ for our U.S. friends) to mean the same thing.

It is a good question. One of the common problems in organisations is the misuse of language. For example, a simple idea, tactic or plan can sound much more impressive if you describe it as a ‘strategy’.

Here’s the answer I gave, and it seemed to satisfy the questioner, but please feel free to challenge in the comments. I hope there may be better answers:

Improvement or Optimisation?

Improvement generally refers to trying to make something better. It is an action of some sort, either a single initiative in the form of a project, or it can refer to the continuous improvement of a process.

Within any improvement, there will be choices.  For example, if you are working on sales processes, an aim may be to increase revenue. In a not-for-profit organisation, the equivalent goal might be an increase in funding.  However, it may become apparent that achieving this requires an increase in cost.  In that case, a better aim might be to maximise profit, and the work might reveal an optimum cost level for any given revenue.  So improvement can deliver optimisation, either as an immediate output or as a long-term outcome: the improvement activity delivers the optimisation, and the terms are therefore not equivalent.

For most organisations, an important aim is usually to increase revenue (or funding).  This is often a target for Sales teams, for example, but it is not a great choice of goal.  If you step back and consider the bigger picture, the reason for wanting to increase revenue is normally to increase profit and, therefore, the return on investment (ROI).  Increasing profit and ROI results from increasing revenue at a faster rate than the associated costs.

The problem is that most revenue increases require higher related costs.  Typically these will be ‘costs of sale’ or variable costs, but in some cases, this will also require an increase in fixed costs.  So, a better aim might be to increase revenue with a minimum cost increase.  As a result, the profit and, therefore, the ROI will increase as much as possible. An increase in revenue that causes an associated reduction in profit, profit margin, or return on investment, is clearly an increase that is not worth pursuing.
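A quick numeric sketch, with entirely hypothetical figures, shows why a revenue increase on its own is not necessarily an improvement:

```python
def profit_margin(revenue: float, costs: float) -> float:
    """Profit as a fraction of revenue."""
    return (revenue - costs) / revenue


# Hypothetical figures: revenue grows by 20%, but costs grow faster.
margin_before = profit_margin(1_000_000, 800_000)    # 200,000 profit, 20% margin
margin_after = profit_margin(1_200_000, 1_050_000)   # 150,000 profit, 12.5% margin
# Revenue is up, yet both profit and margin are down: not an improvement.
```

A team rewarded purely on the revenue line would count this as a success, which is exactly the trap the distinction between improvement and optimisation is meant to expose.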


This leads on to setting the right goals and targets, which is a bigger topic that we will cover in a separate post.  In theory, the sales example is quite simple.  For the greatest return, sales targets should be based on net profit.  However, most organisations use revenue.  Why is this?  In some cases, it will be because of precedent, an aspect of the “that is how we have always done it” mindset.  However, organisations that have tried to move from revenue-based sales targets to profit-based ones have found it isn’t simple.  Revenue is fairly easy to establish: it is usually defined in the sales agreement.  Profit is more complex.  While direct costs may be relatively easy to identify, variable costs might not become clear until time has passed, and there are also debates about the best way to allocate overheads.

In this sales example, the best compromise may be to use analytics to identify a correlation between eventual profit and another variable, such as the specific product, service or market involved.  Once the relationship has been identified, compensation can be weighted accordingly, even though it is still linked directly to revenue.
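As a sketch of that idea, using entirely hypothetical deal data, the average realised margin per product line could be used as a weighting for revenue-based targets:

```python
import statistics

# Entirely hypothetical historical deals: (product line, revenue, eventual profit).
deals = [
    ("consulting", 120_000, 30_000),
    ("consulting", 80_000, 22_000),
    ("training", 60_000, 9_000),
    ("training", 90_000, 12_000),
]

# Average realised margin per product line, usable as a weighting so that
# targets remain revenue-based but favour historically profitable work.
margins: dict[str, list[float]] = {}
for product, revenue, profit in deals:
    margins.setdefault(product, []).append(profit / revenue)
weights = {product: statistics.mean(ms) for product, ms in margins.items()}
```

In this toy data, consulting revenue has historically converted to profit at roughly twice the rate of training revenue, so a sales target could credit consulting revenue more heavily without anyone having to wait for the true profit figures.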


An increase in revenue, funding, or delivery is only an improvement if the associated costs increase at a lower rate.  Delivering the maximum increase for the lowest associated cost is the definition of optimisation.  Understanding the difference between improvement and optimisation is important in many areas, not just operations management.  Being clear about this distinction plus what your real goals are can also help with choosing the right measures and setting the right targets.  The goals, measures and targets that managers use to assess performance will drive behaviours. Driving maximisation when what is needed is optimisation can lead to bad results.


LinkedIn and Bad Science

Bad Science

I’m more aware of bad science than ever since reading Ben Goldacre’s book of the same name a few years ago. Familiarity bias and proximity bias, no doubt!  So, another day, and another LinkedIn post from a well-meaning consultant using “science” to support their argument. The post links to a YouTube video about business leadership.  It makes a perfectly reasonable point about how much interconnectedness there is in the world, and the author could have supported the argument using ideas from Systems Thinking, for example.  Instead, they invoke, and then mangle, the idea of entanglement from quantum mechanics; a favourite subject for purveyors of woo.

Quantum confusion

Entanglement means that when sub-atomic particles interact in certain ways, their quantum states can no longer be described independently, regardless of physical location. In other words, measuring the spin of one entangled particle instantly fixes the corresponding result for the other, no matter how many light years separate them.  Now, this might be OK if the entanglement of quantum theory were limited to an analogy, but we are urged to believe that this long-distance connection between sub-atomic particles explains some mystical connection between all humans.  Why? Well, according to the author, it is because, of course, we are all made of atoms!

Logically, this is the equivalent of stating that humans must be born from clouds because we mostly consist of water and, you guessed it, water comes from clouds too!

Scientific method

The scientific method tests hypotheses to find out whether they stand up.  The assumption is that most hypotheses will be replaced by better ones as we learn more. Sometimes a hypothesis withstands testing and experiment well enough to become a theory that provides a robust explanation of something significant.  A good example is the theory of evolution.  Non-scientists often misunderstand the meaning of the word ‘theory’ in a scientific context, a misunderstanding given away by statements such as: “Evolution? It’s only a theory”.

Free speech is important, so, by all means, everyone should be free to post any idea that crosses their mind.  However, if you don’t understand the science you want to cite, please remember the quote from Chris Morris’s excellent “Brass Eye” TV series: “…there’s no evidence for it, but it’s a scientific fact”.  At least then we’d know you were only joking, even if you are unaware of it yourself.