LONDON – Two alternative approaches dominate current discussions about banking reform: break-up and regulation. The debate goes back to the early days of US President Franklin D. Roosevelt’s “New Deal,” which pitted “trust-busters” against regulators.
In banking, the trust-busters won the day with the Glass-Steagall Act of 1933, which divorced commercial banking from investment banking and guaranteed bank deposits. With the gradual dismantling of Glass-Steagall, and its final repeal in 1999, bankers triumphed over both the busters and the regulators, while maintaining deposit insurance for the commercial banks. It was this largely unregulated system that came crashing down in 2008, with global repercussions.
At the core of preventing another banking crash is solving the problem of moral hazard – the likelihood that a risk-taker who is insured against loss will take more risks. In most countries, if a bank in which I place my money goes bust, the government, not the bank, compensates me. Additionally, the central bank acts as “lender of last resort” to commercial banks considered “too big to fail.” As a result, banks enjoying deposit insurance and access to central bank funds are free to gamble with their depositors’ money; they are “banks with casinos attached to them,” in the words of John Kay.
The danger unleashed by sweeping away the Glass-Steagall barrier to moral hazard became clear after Lehman Brothers was allowed to fail in September 2008. Bail-out facilities were then extended ad hoc to investment banks, mortgage providers, and big insurers like AIG, protecting managers, creditors, and stockholders against loss. (Goldman Sachs became eligible for subsidized Fed loans by turning itself into a holding company.) The main part of the banking system could thus take risks without having to foot the bill for failure. Public anger apart, such a system is untenable.