Hear me out: I’ve been thinking for a while that these surveys given to the general public are not very reliable. My reasoning is below:
Customers are interviewed in fixed numbers, not as a percentage of the customer base. This inevitably benefits neobanks with small, invested customer bases and hurts the rankings of large banking groups like Lloyds and HSBC.
Virgin Money, Clydesdale and Yorkshire Bank are all recorded as Virgin Money*, whereas the Lloyds Group brands, which functionally offer very similar products, are recorded separately, as are NatWest Group’s. This lets us see disparities in ratings across Lloyds Group and NatWest Group banks; these are within the margin of error, but still very odd to see, especially since RBS and NatWest offer identical products.
There is no assumption of what is good or bad, just what satisfaction is. This means, for example, that someone whose parents set them up with a Nationwide account may think a card reader is commonplace and thus be satisfied with it, but their perspective would change after being with another bank. As a result, these statistics again don’t mean much to a potential customer.
According to Y Combinator, startups typically focus on customer service to a much higher degree, as it lets them gather feedback quickly, in line with the advice to “do things that don’t scale”. This could leave consumers with the idea that service is great now and will remain this way, when it’s unlikely to survive scaling up.
Banks with harsher lending criteria, like Monzo, are a lot likelier to have higher satisfaction than a massive retail lender (Lloyds or NatWest Group), as the customers encountered in the survey are far less likely to be in revolving debt, being typically better off.
The 1,000-person sample is simply arbitrary, and although it’s meant to represent the customer base, I’m unsure how that would be accurately enforced. To show it’s arbitrary: the sample is 1,000 in Great Britain (England, Scotland, Wales), with an estimated population of 64.9m, versus 500 in Northern Ireland, with a population of 1.89m.
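To put rough numbers on that point, here is a small illustrative sketch (using the population figures above; the 50/50 satisfaction split for the margin-of-error calculation is an assumption, chosen because it is the worst case). It shows both why the sampling fractions are wildly different and why, statistically, the margin of error depends on the sample size rather than the population size:

```python
import math

def sampling_fraction(sample, population):
    """Share of the population actually interviewed."""
    return sample / population

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion.
    p=0.5 is the worst-case (widest) assumption."""
    return z * math.sqrt(p * (1 - p) / n)

gb = sampling_fraction(1_000, 64_900_000)  # roughly 0.0015% of GB
ni = sampling_fraction(500, 1_890_000)     # roughly 0.026% of NI

print(f"GB fraction: {gb:.5%}  NI fraction: {ni:.4%}")
print(f"MoE, n=1000: ±{margin_of_error(1_000):.1%}")  # about ±3.1%
print(f"MoE, n=500:  ±{margin_of_error(500):.1%}")    # about ±4.4%
```

So NI customers are sampled at roughly seventeen times the rate of GB customers, yet the NI figures still carry the wider error bars, which is the sense in which the fixed numbers are arbitrary.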
I had more thoughts but I seem to have forgotten them. Still, I believe we could definitely abolish this, just to save some money on producing these useless statistics for consumers.
*This is supposedly due to the closure of the prior two brands, which became part of Virgin Money.
This rings true, as I think we’re already seeing it, especially with the mess around Monzo’s chat and the decision to take away 24-hour support.
I think your point about the inherent subjectivity of asking people to rate their own satisfaction, and about some customer profiles (based on demographics, etc.) being likely to interpret that differently, is also right on the money.
The other thing that really feels odd is that the final results usually show a difference of only a few percentage points across the whole top 10, and only about 10–15% between the top bank and the bottom one. Surely service isn’t really so similar across all banks?
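Those few-point gaps may not even be real differences. A hedged sketch, using a standard two-proportion z-test with invented scores (80% vs 77% satisfied, not actual survey figures) and the roughly 1,000-per-bank sample size discussed above:

```python
import math

def two_proportion_z(p1, p2, n1, n2):
    """z statistic for the difference between two independent proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical 3-point gap between two banks, each surveyed with n = 1,000:
z = two_proportion_z(0.80, 0.77, 1_000, 1_000)
print(f"z = {z:.2f}")  # |z| < 1.96, so not significant at the 5% level
```

Under these assumptions, a 3-point gap between two mid-table banks is indistinguishable from sampling noise, so much of the published ranking order could plausibly be churn rather than signal.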
I honestly think the real gap is probably at least 50%. For example, Starling closed my chat several times for no reason; RBS, meanwhile, is currently ordering me a chequebook over live chat because I don’t like doing phone calls, using my online banking PIN as verification. Very friendly too, and I appreciate the gesture that they said they’ll follow up with me in a week to see that my account is running well.
Edit: all ordered; paying-in slips and a chequebook are on the way for the Child & Co account.
I suspect Monzo, Starling, First Direct, and maybe Nationwide are likely to benefit from bias. I think customers of these banks are more likely than customers of other banks to have chosen their bank for a specific reason (e.g. app quality, phone service, building society roots) and these customers are more likely to respond positively to surveys to validate their choice.
Customers of other banks are more likely to have become customers without much thought (e.g. the most local bank) and will have a much more indifferent attitude, which would translate to mediocre survey scores.
However, the above really applies to any survey, not just the IPSOS one.
On the IPSOS survey, I personally like the way they pose the question: what is the likelihood you would recommend a product/feature to family and friends? It’s more straightforward than what you sometimes get with these types of survey, and I think everyone can answer it confidently.
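For what it’s worth, one common way to score that kind of 0–10 “would you recommend?” question is a net-promoter-style tally. This is purely illustrative (the ratings are invented, and I’m not claiming the IPSOS survey scores its results this way; its published figures report a percentage who would recommend):

```python
def nps(ratings):
    """Net-promoter-style score: promoters (9-10) minus
    detractors (0-6), as a percentage of all respondents."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

sample = [10, 9, 9, 8, 7, 7, 6, 5, 10, 3]
print(nps(sample))  # 4 promoters, 3 detractors -> 10.0
```

Note how this kind of scoring throws away the middle of the scale, which is another reason aggregate results from “recommend” questions can compress into a narrow band.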
Would be interesting to see results split by age though - perhaps 18-40, 40-65, 65+
If you restrict these surveys only to people who have tried every single bank, then they’re not going to happen. I also don’t think lack of exposure to other banks invalidates a recommendation; I can recommend something without having sampled everything else.
Consider an example from another topic. If I go on a trip to another country and have a good time I can recommend it to others. There’s nothing invalid about my recommendation just because I haven’t been to every country in the world. What I can’t reasonably do is claim that country is the best place to visit in the whole world.
It’s the surveys that ask people what they think is the best bank that I think are the nonsense ones.
Now you’re almost raising another point, related to product lines.
Starling has one current account, so you know that everyone is reviewing the same thing. Nationwide has multiple current accounts: it’s possible that FlexPlus users think they’re great (due to the insurance being good value) while FlexAccount users consider them average. All these users are aggregated under one score, so the discrepancies in satisfaction across different parts of the customer base are hidden.
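A minimal sketch of that aggregation effect. All figures are invented for illustration (FlexPlus and FlexAccount are real Nationwide products, but these satisfaction numbers are not):

```python
# (satisfied respondents, total surveyed) per hypothetical product
flexplus = (450, 500)     # 90% satisfied
flexaccount = (300, 500)  # 60% satisfied

# The headline figure pools both groups into one number.
satisfied = flexplus[0] + flexaccount[0]
surveyed = flexplus[1] + flexaccount[1]
headline = satisfied / surveyed
print(f"headline satisfaction: {headline:.0%}")  # 75%
```

The published 75% matches neither group’s actual experience, which is exactly the discrepancy a per-product breakdown would expose.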