Memoirs of a Citibank technology warrior - Lessons learnt from the “Global Go to Common” project (Part 4 of 4)
There is much to learn from Ajit Kanagasundram, former managing director, technology, Asia Pacific, at Citibank, who built the bank's global card processing capability from Singapore. This four-part series reveals his insights into the people, processes and decisions made over the years, which hold considerable lessons for generations to come.
- Citibank's Global Go to Common project, which cost $1.4 billion, was shut down after five years of increasing complexity, escalating delays and costs
- The project failed because it sought a single code base for a complex system spanning multiple geographies, requiring complex and protracted requirements gathering
- The Global Go to Common project ignored that the bank's back-end technology platform was already well standardised
This is the last of four parts. Part one covered how the bank built the CORE; part two, the global cards project; and part three, the “Systematics” project. In this part, Kanagasundram shares details of the expensive disaster of the global Go to Common project, on which over $1 billion was spent without achieving the required results before it had to be cancelled.
By 2008 the Citibank International Consumer platform was in good shape. There was a project called Eclipse to convert the banking customer service platform to the Cards Workstation architecture, which was technically superior, so that there would be a common customer interface. Then DK Sharma, the Asia technology head, launched the even grander Rainbow project. DK Sharma and Mark Terkos, the global technology head, decided to expand and roll out this program globally, including the very different North American business. This was scope creep of huge proportions.
The platform was a complex, multi-layered suite of applications consisting of hundreds of millions of lines of code. What was promised was a single version of all modules covering every country in the world, including the US.
They also promised enormous savings in operations processing, technology maintenance expenses, instantaneous roll out of new projects globally (never mind the business rationale), and unspecified new business revenue – $4 billion over ten years! The cost of the Global Go to Common project was equally gigantic – $1.4 billion, consisting of multiple projects and major expenditure proposals (MEPs) for all regions. Ignored was the fact that the Citi back-end technology platform was already well standardised and only required successive releases to continue the convergence. Also overlooked was the obvious fact that the North American business was very different to the upscale Asian franchise.
The results were inevitable and predictable.
After global kick-off meetings, accompanied by a reiteration that Citi was on the path to technology Nirvana, the project ground down to interminably long and complex requirements-gathering sessions dominated by US participants. The old, simple accounting method of directly charging each business for the changes made for it was superseded by taking all expenses centrally and allocating a “head office tax” to each business. This naturally removed all constraints, and the businesses asked for the moon since they knew they would in any case be allocated a pre-determined sum. All relationship between functional value and cost was lost.
The number of staff employed and the head office bureaucracy ballooned, with a cast of thousands in Singapore and India. The only real beneficiaries of this monster project in the end were the Indian software houses supplying contract technical staff, such as Tata Consultancy Services and WIPRO. They made a fortune.
In the past I had a simple technology organisation: a relationship management team of business-savvy technical staff who oversaw requirements gathering, acted as domain experts, standardised requirements between countries and regions, and were in charge of testing; and a technical team, organised by functional expertise (authorisations, fees and interest, and so on), which handled all the technical work. It was simple but effective, cutting down on the number of internal interfaces and communications. Now the teams were split into numerous stand-alone units – requirements gatherers, business analysts, functional designers, coders, testing teams, deployment teams and so on – stationed in the US, Singapore and India. The internal communications and interfaces ballooned; endless staff meetings across time zones to co-ordinate the teams, many layers of management and complex reporting structures made progress painfully slow.
The results were not difficult to predict: after five years of continuously increasing complexity, escalating delays and costs, and only partial implementation, the project was finally shut down in 2015 by Stephen Bird, who was by then head of the global consumer bank. The $1.4 billion had by then been spent and a further $700 million had been authorised, but fortunately Stephen diverted those funds to mobile and cloud applications. He was prompted by complaints from the profitable Asia franchise, which saw no value in the project and whose urgent requirements had been frozen for years because of it.
Why Global Go to Common failed
I will now spell out why the whole project was unnecessary and why it failed, as it is a salutary lesson in technology governance:
- The main reason given for having a single code base in a complex system across multiple geographies is that it cuts down on maintenance and enables faster rollout of enhancements. I emphasise the word single, which was the promise of Global Go To Common. In fact, the exact opposite is true. This is not intuitively obvious, but it is well recognised by experienced technology managers for the reasons given below.
- The twin objectives of cheaper maintenance and faster rollout can be achieved with a common architecture and technology platform and a reasonable degree of standardisation – say 90%. This is within the span of control of technology managers, and the necessary processes, such as requirements gathering, are less cumbersome. The truth of this is proved by the fact that the fraud system for cards – FEWS – was rolled out globally within three weeks, even though the base for ECS+ was VisionPlus in Europe and Latam and CardPac in Asia and the Middle East. This was way back in 2002. You only need to keep the interfaces standard and have common data definitions. Maintenance is also cheaper because the code base is less complex, and you only need a single team.
- Gathering requirements across businesses is impossibly long and complex. Once the initial requirements are gathered and you move on to future enhancements, the problem only gets worse: given that only a finite number of changes can be accommodated between releases, how do you arbitrate between the requirements of different countries? The result is usually a cumbersome and unnecessary bureaucracy that only delays the process.
- Testing becomes increasingly time consuming. To maintain a single code base, every change made anywhere must be tested by every other business globally before it can be incorporated. This requirement alone would have sunk Global Go To Common, even if it had seen the light of day.
- The code base becomes increasingly complex, cumbersome and inefficient, riddled with endless “IF-THEN-ELSE” branches for country-specific logic.
- There is such a thing as span of control in management, and a project of this size and complexity is beyond it.
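The contrast in the points above can be sketched in code. The snippet below is a purely hypothetical illustration – the country rules, fee numbers and function names are invented, not Citi's – but it shows why a forced single code base sprouts per-country branching in shared logic, while a common architecture keeps a standard interface and lets each market plug in its own module.

```python
# Hypothetical illustration: single global code base vs. common architecture.
# All rules and figures below are invented for the sketch.

# Single code base: every country's rule lands inside shared logic,
# so each new market adds another branch that everyone must retest.
def late_fee_single_codebase(country: str, balance: float) -> float:
    if country == "US":
        return min(40.0, balance * 0.05)
    elif country == "SG":
        return max(80.0, balance * 0.03)
    elif country == "IN":
        return 500.0 if balance > 10000 else 250.0
    else:
        return balance * 0.04  # ...and so on, one branch per market


# Common architecture: a standard interface with common data definitions;
# each country ships its own small module behind the same contract.
class LateFeePolicy:
    def fee(self, balance: float) -> float:
        raise NotImplementedError


class USLateFee(LateFeePolicy):
    def fee(self, balance: float) -> float:
        return min(40.0, balance * 0.05)


class SGLateFee(LateFeePolicy):
    def fee(self, balance: float) -> float:
        return max(80.0, balance * 0.03)


POLICIES = {"US": USLateFee(), "SG": SGLateFee()}


def late_fee_common_platform(country: str, balance: float) -> float:
    return POLICIES[country].fee(balance)
```

Both functions give the same answers, but in the second design a change to Singapore's rule touches only `SGLateFee`, so only Singapore needs to retest – the testing-bottleneck and IF-THEN-ELSE problems described above simply never arise.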
It is important to learn the lessons of this costly failure – a repetition, on a much larger scale, of the failure of CBS 35 years ago – for, in the words of the philosopher George Santayana, “those who cannot remember the past are condemned to repeat it”.
Ajit Kanagasundram is a former managing director, technology, Asia Pacific at Citibank. The views expressed herein are strictly those of the author.
Keywords: Citibank, Technology, Global Go To Common, Consumer Banking