London Investment Banking Assessment Centres: Exercises, Scoring, and Prep Guide

An assessment centre is a bank-run final-round hiring day where you’re observed doing simulated first-year investment banking work. In London IB, it usually means a tight sequence of case analysis, group decision-making, writing, and interviews, scored by multiple assessors against a rubric.

That definition matters because candidates often prepare for the wrong thing. This is not a generic graduate event where personality carries the day. It’s closer to an investment committee drill: you get imperfect data, limited time, and someone senior asking, “What’s the call, what’s the support, and what could go wrong?”

London assessment centres show up most often in Analyst and Associate hiring across M&A, industry coverage, leveraged finance, and capital markets. The bank uses them to reduce hiring risk after first-round screens. A CV and one interview can’t prove you’ll deliver clean work product at speed, take feedback without drama, and keep a team moving.

There’s a simple asymmetry at play. One poor hire costs months of rework, missed deadlines, and team friction. One missed “genius” candidate rarely breaks the franchise. So assessors lean toward avoiding obvious risk. Your job is to be reliably scorable by people who have incomplete context and not much time.

What the assessment centre is really testing (and why it matters)

An assessment centre is not a pure intelligence test and not a pure technical exam. Banks score outputs and behaviors because the role demands both. Analysts and Associates are paid to produce numbers that tie out, and to do it in a way that helps the team ship the work.

If you’re technically strong but hard to work with, you raise the cost of supervision. Dominating airtime, ignoring instructions, or turning every discussion into a contest makes you expensive to manage. Many assessors will pass on that risk even if your valuation is decent. Conversely, if you’re polished but your numbers are sloppy, you fail the trust test because no one wants to explain to a VP why the model breaks.

Stakeholder incentives explain the structure. HR wants a defensible process with consistent scoring and lower bias risk. The business wants someone who can survive the first six months without constant rescue. Individual assessors want to avoid sponsoring a candidate who later underperforms, so they overweight red flags that are easy to justify in a debrief.

Fresh angle: treat it like an “audit trail” exercise

A useful way to think about a London IB assessment centre is that you’re building an audit trail under time pressure. In real deals, seniors care less about whether you “sound smart” and more about whether your work is checkable. If an assumption changes, can someone see where it lives? If a number is challenged, can you trace it back to the exhibit? If you make a judgment call, can you state the trade-off?

This mindset changes how you prepare. Instead of memorizing niche formulas, you practice making your reasoning legible: clear structure, explicit assumptions, and a final output that someone else could use immediately.

The typical London IB exercise set (and what wins each one)

Most London assessment centres bundle two to five components. The mix varies by bank and team, but the centre of gravity is always banking work product: a concise recommendation supported by clean numbers and defensible assumptions. Some firms add numerical tests or situational judgment modules, but they rarely outweigh the case, group exercise, and interviews.

1) Financial analysis or valuation case (often Excel-lite)

Case formats range from a paper pack with exhibits to a laptop exercise with a template. Time is tight enough that perfect completion isn’t the goal, so prioritization becomes part of the test. The output is usually a recommendation: pursue or pass, valuation range, financing structure, or the diligence questions that matter.

Assessors watch for numerical accuracy under time pressure, basic modeling hygiene, and your ability to translate exhibits into drivers. They also watch what you ignore. In the job, you will constantly trade completeness for timeliness, and your judgment shows up in where you spend minutes.

It’s rarely a full three-statement build from scratch. More often it’s, “Can you avoid breaking a simple model, and can you explain assumptions without hand-waving?” Simple and correct beats complex and fragile.

2) Group exercise (deal problem solving)

You’re given a prompt and asked to deliver a shared output. Typical prompts include ranking acquisition targets, advising a board on options, outlining a debt package, or responding to a crisis like a profit warning.

This is a governance and conduct test. The bank wants to see whether you can frame a problem, allocate work, integrate viewpoints, and reach a decision without turning the room into a mess. You don’t score by talking the most. You score by improving the group’s throughput and the quality of the final answer.

3) Presentation (individual or group)

Often the presentation follows the case or group work. Sometimes you get slides; sometimes it’s flipchart or a one-page handout. Bankers will push back because that’s the job in miniature: someone asks why your assumption holds, and you either defend it or you fold.

Strong candidates lead with the recommendation, support it with two to three reasons, and use numbers as evidence rather than decoration. They don’t drown in detail, and they don’t overstate certainty. Instead, they hold a firm view with clear conditions.

4) Written exercise (email, memo, or one-pager)

You might write an email to a senior banker, a client, or an internal committee. The prompt usually includes competing priorities and incomplete information. The bank is asking, “Can we put your words in front of a client with minimal editing?”

Writing is a separator because many candidates can talk through a case but can’t produce a crisp written output at speed. A messy email signals revision burden, and revision burden is real cost.

5) Competency and technical interviews

These interviews tend to be more structured than earlier rounds. Expect core accounting, valuation, and commercial judgment questions, plus motivation and credibility checks. For laterals, deal experience will be scrutinized for substance and your exact role.

Interviews often act as the consistency check. A strong centre performance can still be undone by weak fundamentals or a motivation story that doesn’t hold together.

How scoring works in practice (and how to be “reliably scorable”)

Banks don’t publish the scoring model, but the constraints are visible. Scoring has to be fast, comparable across candidates, and defensible. That pushes banks toward anchored behaviors and threshold criteria.

Many firms use competency frameworks with observable markers such as “structures the problem,” “prioritizes,” “tests assumptions,” “influences without dominating,” and “communicates concisely.” This creates a practical rule: assessors are not only grading your answer, they’re grading whether your approach is legible against the rubric.

Most centres function like a two-stage filter. First, you must clear minimum thresholds across multiple domains. Then candidates who clear thresholds get ranked for offers. Threshold failures usually come from a small set of causes: careless arithmetic, ignoring instructions, poor time control, abrasive group behavior, rambling communication, or invented facts.

Weighting varies, but case output and presentation often carry the most for analytical roles. Group behavior often carries more than candidates expect because culture risk is expensive. Writing matters when the team needs client-ready materials quickly. Interviews matter when they uncover inconsistencies.

Assessors use blunt heuristics because they’re busy. “Would I trust this person to send a live email next week?” “Would I staff them with a VP and not worry?” “Do they make the team faster or slower?” If you keep those questions in mind, your choices get simpler.

Execution that travels across exercises

Performance consistency is what separates offers from near-misses. You can be average in one component and still win if you’re steady across the day. The goal is to run a repeatable process that produces clean, defensible outputs.

Case work: use a repeatable flow

Most cases give a company overview, abbreviated financials, market data, and a prompt. The fastest way to fail is to solve a different problem than the one asked.

A reliable flow starts by restating the question in one sentence, naming the decision and the decision-maker. “We need to advise whether the client should pursue TargetCo and, if yes, at what valuation range.” That single line keeps you from drifting.

Next, sketch a quick issue tree: business quality, valuation, structure or financing, and key risks with mitigants. You don’t need fancy charts, you need a map that keeps your work organized.

Then extract the minimum viable numbers: revenue, EBITDA, margins, growth, leverage, cash conversion, and obvious red flags like working capital spikes. If you can’t reconcile the basics, no multiple will save you. For common failure patterns, review three-statement model logic errors and what they look like under time pressure.

Choose the simplest valuation tool that fits the data and time. Trading comps and a simple sensitivity often beat a half-built DCF. If you do run a DCF, keep it clean and run checks like the ones in a DCF model checklist. If you need a fast template approach, a simple Excel DCF is usually closer to what assessment centres expect than an overbuilt model.

Triangulate and sanity-check. If your implied multiple is far outside the comp set, say why. If it’s an error, find it. If it’s a real difference, tie it back to growth, margins, or risk.
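
The comps-range, sensitivity, and sanity-check steps above can be sketched in a few lines. All figures here are hypothetical placeholders, not data from any real case pack.

```python
# Minimal sketch of the comps-plus-sensitivity flow. All inputs are invented.

target_ebitda = 120.0                    # £m, normalized EBITDA from the exhibits
comp_multiples = [7.5, 8.2, 8.8, 9.4]    # EV/EBITDA from the comp set

low, high = min(comp_multiples), max(comp_multiples)
ev_range = (target_ebitda * low, target_ebitda * high)
print(f"Implied EV range: £{ev_range[0]:.0f}m to £{ev_range[1]:.0f}m")

# Simple sensitivity: how the range moves if EBITDA is re-based
for ebitda in (110.0, 120.0, 130.0):
    print(f"EBITDA £{ebitda:.0f}m -> EV £{ebitda * low:.0f}m to £{ebitda * high:.0f}m")

# Sanity check: flag an implied multiple far outside the comp set
implied_multiple = 1300.0 / target_ebitda    # e.g. a DCF-implied EV of £1,300m
if not low <= implied_multiple <= high:
    print(f"Implied {implied_multiple:.1f}x sits outside {low:.1f}x to {high:.1f}x: explain it or find the error")
```

The point is not the arithmetic; it is that every output traces back to a labeled input an assessor can check.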

Finish with a recommendation that has conditions. “Proceed at £X to £Y EV, subject to confirming churn, normalized working capital, and debt capacity.” That sounds like banking because it is banking.

Time is usually the binding constraint. Strong candidates reserve the last five to ten minutes to craft a clean output. Weak candidates spend all their time building a fragile model and then can’t explain what it means.

Numerical hygiene is non-negotiable. Label units, keep assumptions visible, and avoid accidental hardcodes. If you’re given a template, don’t reformat it aggressively. Broken formatting, circular references, and inconsistent units are quiet ways to fail.

Group exercise: steer without taking over

The first minute matters because it sets the working rhythm. Propose a structure: clarify the objective, agree criteria, split analysis, then reconvene for a decision. Offer roles lightly such as timekeeper, scribe, and presenter so the work moves.

Use a simple scoring grid if you’re ranking options. Make criteria explicit even if subjective: strategic fit, valuation, financing feasibility, execution risk, and regulatory risk. A visible framework reduces arguments and speeds convergence.
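
A minimal sketch of such a grid, assuming illustrative criteria weights and 1-to-5 scores (risk criteria scored so that higher means lower risk); the targets and numbers are invented:

```python
# Hypothetical weighted scoring grid for ranking acquisition targets.

criteria_weights = {
    "strategic_fit": 0.30,
    "valuation": 0.25,
    "financing_feasibility": 0.20,
    "execution_risk": 0.15,   # higher score = lower risk
    "regulatory_risk": 0.10,  # higher score = lower risk
}

targets = {
    "TargetA": {"strategic_fit": 4, "valuation": 3, "financing_feasibility": 4,
                "execution_risk": 2, "regulatory_risk": 3},
    "TargetB": {"strategic_fit": 3, "valuation": 4, "financing_feasibility": 3,
                "execution_risk": 4, "regulatory_risk": 4},
}

def weighted_score(scores: dict) -> float:
    # Sum of weight x score across all agreed criteria
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(targets, key=lambda t: weighted_score(targets[t]), reverse=True)
for name in ranking:
    print(f"{name}: {weighted_score(targets[name]):.2f}")
```

The grid's value in the room is that disagreements become arguments about a specific weight or score, not about the whole answer.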

Invite quieter participants early. “We haven’t heard from X, any risks we’re missing?” That shows you’re managing the room, not managing your ego. Summarize often because it keeps alignment and creates observable leadership for assessors.

When disagreement appears, convert it into a test. “We disagree on integration risk and financing capacity, let’s pressure-test both.” That’s what bankers do: turn opinion into diligence.

Avoid two extremes: passive silence and aggressive control. The sweet spot is moving the group to an answer while making others better.

Presentation: lead with the call, then earn it

A practical structure rarely fails: recommendation first, then two to three reasons, then valuation or quantitative support, then risks and mitigants, then next steps and open questions.

When challenged, answer in three moves. First, restate the question. Second, cite the best available evidence from the case. Third, state what you would test with more time or data. Do not invent facts. If you don’t know, say so and propose the diligence step. Credibility compounds, and bluffing breaks trust in one sentence.

Written task: use email discipline

Written tasks reward structure more than style. Use a subject line that matches the ask, a one-sentence context line, three bullets with key facts or analysis, a clear recommendation or decision needed, and explicit next steps with owners and timing.

  • Subject clarity: Mirror the prompt so the reader instantly knows what you want.
  • Three-bullet core: Lead with the most decision-relevant numbers and constraints.
  • Decision request: Ask for approval, a choice, or guidance, not “thoughts.”
  • Next steps: Assign owners and deadlines to show execution thinking.
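
As a sketch, the skeleton above can be expressed as a simple template; the subject, figures, owners, and deadlines are all invented placeholders:

```python
# Illustrative email skeleton following the structure above. All content is placeholder.

def build_update_email(subject, context, bullets, decision, next_steps):
    lines = [f"Subject: {subject}", "", context, ""]
    lines += [f"- {b}" for b in bullets]
    lines += ["", decision, "", "Next steps:"]
    lines += [f"- {owner}: {task} by {deadline}" for owner, task, deadline in next_steps]
    return "\n".join(lines)

email = build_update_email(
    subject="TargetCo: recommendation and approval needed by Friday",
    context="Summary of the TargetCo valuation work ahead of Friday's committee.",
    bullets=[
        "Implied EV range £900m to £1,130m (7.5x to 9.4x EBITDA of £120m)",
        "Key risk: working capital spike in H2; normalization pending",
        "Financing capacity supports up to 4.0x net leverage",
    ],
    decision="Recommend proceeding at £900m to £1,050m, subject to diligence; please approve.",
    next_steps=[("Analyst", "confirm churn data", "Wed"), ("VP", "review memo", "Thu")],
)
print(email)
```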

Avoid long paragraphs and avoid jargon unless it saves words and the audience will understand it. If you cite numbers, include units and periods. If there’s a risk that affects advice or timing, surface it plainly.

Preparation that reduces variance (not just knowledge)

The objective isn’t to learn everything. It’s to eliminate predictable mistakes and build a repeatable operating model under time pressure.

Technical floor: build a minimum viable toolkit

For London IB centres, the technical baseline usually includes the accounting links between the three statements, enterprise value versus equity value, and standard valuation methods (trading comps, precedents, and DCF mechanics at a high level). You should also be comfortable with leverage and credit metrics like net debt and interest coverage, and have basic M&A and LBO intuition even if you won’t build a full model. If you want a targeted workflow, an analyst modeling toolkit can help you standardize your approach.
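
A worked example of the EV bridge and credit metrics named above, with illustrative figures (and deliberately ignoring items like minority interest and preferred equity):

```python
# Hypothetical inputs, all in £m
market_cap = 800.0        # equity value
total_debt = 450.0
cash = 150.0
ebitda = 120.0
interest_expense = 30.0

net_debt = total_debt - cash
enterprise_value = market_cap + net_debt   # simple EV bridge

leverage = net_debt / ebitda               # net debt / EBITDA
coverage = ebitda / interest_expense       # EBITDA interest coverage

print(f"EV £{enterprise_value:.0f}m, net leverage {leverage:.1f}x, coverage {coverage:.1f}x")
```

Getting these definitions right at speed is exactly the kind of floor the centre tests; the advanced theory rarely comes up.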

If you’re rusty, choose accuracy over breadth. Candidates fail on sign errors and definition gaps more often than on advanced theory.

Case practice: train constraints, not perfection

Most candidates practice too slowly and too comfortably. The centre rewards speed, structure, and clean output under ambiguity.

A workable routine is two timed cases per week for three weeks, each ending with a one-page recommendation. After each case, identify your biggest numerical error, your weakest assumption, and one communication change that would make the output more usable. Add one group rehearsal per week with peers and rotate roles so you don’t only practice your favorite seat.

Behavioral prep: bring evidence, not slogans

Competency interviews reward specific evidence. Keep a small set of stories that flex across prompts: ownership under pressure, conflict resolution, catching and fixing an error, and making a prioritization call with limited time.

Use concrete actions and outcomes. “I led a team” is not evidence. “I rebuilt the model, found a unit mismatch, and prevented a client-facing mistake before the 8 a.m. send” is evidence.

Market awareness: be current and cautious

Commercial awareness is usually scored on realism, not bold predictions. Anchor to rates and credit conditions, sector trends relevant to the bank, and a few recent deals only if you can explain the rationale, valuation logic, and the main risks.

Use conditional language. “If rates stay elevated, refinancing risk rises for levered borrowers” shows judgment and avoids sounding like a forecaster.

The quick failure screens (and how to avoid them)

Assessors often run implicit “kill tests,” so your job is to remove easy reasons to say no. You can do that by focusing on accuracy, instruction-following, and professional conduct.

  • Numerical trust: Don’t mix thousands and millions, confuse EBITDA with operating profit, or miss negative signs.
  • Instruction compliance: If asked for a recommendation, don’t deliver a data dump.
  • Time control: Allocate time up front and protect final minutes for the conclusion.
  • Group conduct: Don’t interrupt or dismiss ideas; use questions and summaries.
  • Integrity under gaps: State assumptions and how you’d verify them instead of inventing facts.
  • Client-ready writing: Keep emails structured so they require minimal editing.
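
One way to avoid the thousands-versus-millions trap above is to normalize every figure to a single scale before computing any ratio. A tiny sketch with assumed figures:

```python
# Normalize mixed-scale inputs to £m before comparing. Figures are invented.

SCALE_TO_MILLIONS = {"k": 0.001, "m": 1.0, "bn": 1000.0}

def to_millions(value: float, scale: str) -> float:
    return value * SCALE_TO_MILLIONS[scale]

revenue = to_millions(1.2, "bn")      # £1.2bn -> £1,200m
ebitda = to_millions(180_000, "k")    # £180,000k -> £180m
margin = ebitda / revenue             # both inputs now in the same units
print(f"EBITDA margin: {margin:.0%}")
```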

Closeout: lock in your work like a professional

Closeout matters because it signals how you will operate on a real deal. At the end of the day, keep your materials organized. Archive your notes by exercise (index, versions, Q&A, who said what, and any feedback), then hash your final outputs so you can confirm what you sent and when. Retain them for a sensible period for your own review, then delete vendor-stored files where applicable and obtain a deletion or destruction confirmation.
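
Hashing a final output is a one-liner in most languages; a sketch in Python, with a hypothetical filename and contents:

```python
import hashlib
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    # Read in chunks so large files don't need to fit in memory
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical final output for illustration
out = Path("final_recommendation.txt")
out.write_text("Proceed at £900m to £1,050m EV, subject to diligence.\n", encoding="utf-8")

digest = sha256_of_file(out)
print(f"{out.name}: {digest}")
```

Recording the digest alongside your notes lets you confirm later that a file is byte-for-byte what you submitted.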

If a legal hold applies due to university, employer, or bank policy, it overrides deletion and you follow the hold. This is a small detail, but it’s a credible tell: regulated environments reward candidates who treat information as controlled, not casual.

Key Takeaway

A London investment banking assessment centre rewards candidates who produce checkable work product and who make teams faster, not louder. If you build an “audit trail” mindset – clear structure, clean numbers, explicit assumptions, and professional communication – you become easy to score and hard to reject.
