How should advisers rate the ratings agencies?


The Retail Distribution Review has enhanced the power and reach of fund rating agencies, as advisers have realised that they may not be able to offer fully compliant investment selection in-house. But these offerings can blur into a haze of crowns and stars and capital letters: how can advisers determine which ratings offer value?

In practice, though many would tell you otherwise, the process of fund rating does not differ significantly from group to group. With some 3,500 funds in the UK and 40,000 across Europe, ratings groups need a way of isolating those likely to give a stronger return. Mostly this is done through quantitative screens, which increasingly incorporate risk metrics such as volatility and alpha alongside raw performance measures.
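For the technically minded, here is a minimal sketch of what such a quantitative screen might look like, using hypothetical monthly return data. Real providers’ screens are far richer; every name and threshold below is illustrative only.

```python
# A toy quantitative screen: rank funds by single-factor alpha, after
# discarding the most volatile quartile. All inputs are hypothetical
# monthly returns; real screens use far richer data and risk metrics.
import numpy as np
import pandas as pd

def annualised_volatility(monthly_returns: pd.Series) -> float:
    """Annualised standard deviation of monthly returns."""
    return monthly_returns.std() * np.sqrt(12)

def capm_alpha(fund: pd.Series, benchmark: pd.Series) -> float:
    """Annualised alpha from a single-factor regression on the benchmark."""
    beta = np.cov(fund, benchmark)[0, 1] / np.var(benchmark, ddof=1)
    return (fund.mean() - beta * benchmark.mean()) * 12

def screen(funds: pd.DataFrame, benchmark: pd.Series) -> pd.DataFrame:
    """Score each fund (one column per fund) and rank the survivors."""
    stats = pd.DataFrame({
        "alpha": funds.apply(lambda f: capm_alpha(f, benchmark)),
        "volatility": funds.apply(annualised_volatility),
    })
    keep = stats["volatility"] <= stats["volatility"].quantile(0.75)
    return stats[keep].sort_values("alpha", ascending=False)
```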

Some ratings are based on quantitative screens alone – Morningstar’s star ratings, FE’s Crown ratings and Citywire’s Fund Manager ratings – though most providers make it clear these are not predictive (the regulators make sure of it), but merely a guide to how a manager or fund has fared in the past.

The next stage is a qualitative screen, where ratings groups see the whites of a manager’s eyes, look at their support network and ask them questions about their process. This is designed to be forward-looking, expressing a conviction on how likely a fund manager is to repeat any past success. Ratings groups vary in their emphasis, the skill of their analysts and the type of ratings they assign (crowns, stars and so on), but the process is similar.

Nevertheless, ratings providers do differ on a number of key issues that advisers should bear in mind when selecting with whom to partner.

How rating agencies are paid

No-one would deny that rating agencies need to be paid. The analysis and ongoing monitoring of funds take resources, and greater resources can make a real difference to the quality and coverage a ratings group provides. But how agencies are paid has been a source of considerable controversy.

The old model of ‘pay to play’ – whereby fund managers pay to have their funds rated – is largely defunct, having been thoroughly exposed during the financial crisis. It had all sorts of inherent problems: fund management groups submitted only those funds they believed were likely to do well, and rating agencies appeared to favour the groups with the deepest pockets. It was not a model that served advisers well.


Now the biggest philosophical gap is between those who believe that fund ratings groups should not take any money at all from fund houses, and those who believe that charging for selected services does not compromise their overall objectivity. Rory Maguire, co-founder and chief executive officer of Fundhouse, is firmly in the first camp: “Rating providers, for the most part, are subsidised by the fund managers (they sell them the marketing rights to use the rating logos) and so fund managers are their clients, big ones at that. This way they can give the fund ratings away for free to advisers.”

Clearly this has the potential to introduce biases – ratings are restricted to those funds whose managers are able and willing to pay. That said, fund management groups are not charities, so the cost of a rating is not usually an obstacle. The real problem is a straightforward business conflict: can a group reliant on fund managers for its income genuinely give one a bad rating? Do it often enough and it may not have much of a business.

The ratings providers strenuously deny that licensing agreements or similar arrangements with fund managers introduce any bias into their ratings. Morningstar, for example, includes its ratings and reports in the software platforms it sells on subscription to institutions, intermediaries and individuals; the ratings and reports are also sold within information feeds and as a stand-alone research service, and some of those clients will be fund managers – but, it argues, this does not sway the ratings themselves.

Christopher Traulsen, director of fund research at Morningstar, says that it has “complete independence”. “If a fund is mediocre, we will say that. We will back our analysts to give a truly honest opinion.” The group is clear that it acts for the adviser and not for the fund manager.

Certainly, the group has a history of demonstrating its independence. It was willing to risk the ire of fund managers, for example, when it recently introduced its ethical ratings for funds. Previously, fund managers had given little thought to the potential SRI score of their portfolios, and those with poor scores feared losing investors.

In the end, it may come down to the number of strings any ratings agency has to its bow. Richard Romer-Lee, managing director at Square Mile, points out that while fund management groups are part of the group’s revenue stream, Square Mile is also paid by advisers: “We have a diverse client bank, who are looking for different things…We own this business and believe we have full alignment of interest with clients.” It is certainly true that many fund rating groups live and die by their credibility, and a sniff of bias could see their business undone.

Maguire concludes: “A clear, unambiguous view is important, rather than a muffled one. Also, when you look at the rating firm, see who they market – do they market fund groups openly? This could be an indication that they have close relationships with fund groups, which may challenge objectivity.”

Whether rating agencies hold themselves to account

Are the raters rated? It is all very well devising what appears to be a good way to analyse and rate funds, but does it really help identify funds likely to outperform their peers and their benchmark over the long term?

For many fund rating groups there is an important distinction between funds that are likely to perform well and those that are best of breed. The former depends on market conditions, while the latter is in their own gift. Traulsen says: “We are not making macroeconomic calls; we are not market timers. We are simply saying that if you want a UK small cap manager, this is a good one, rather than that UK small caps are likely to do well.”


Morningstar gathers performance data on its own ratings. It has found that its gold-rated funds have delivered an average annual return 1.34 per cent above the category average over rolling five-year periods, while its negative-rated funds have returned 0.82 per cent below the category average.
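As a rough illustration of the arithmetic behind this sort of self-assessment – a generic sketch, not Morningstar’s published methodology – the comparison might be computed like this:

```python
# Illustrative only: average excess annualised return of a fund over its
# category across rolling five-year windows. Not Morningstar's methodology.
import pandas as pd

def avg_rolling_excess(fund_index: pd.Series, category_index: pd.Series,
                       window_years: int = 5) -> float:
    """Both inputs are monthly total-return index levels on matching dates."""
    months = window_years * 12
    fund_ann = (fund_index / fund_index.shift(months)) ** (1 / window_years) - 1
    cat_ann = (category_index / category_index.shift(months)) ** (1 / window_years) - 1
    return (fund_ann - cat_ann).dropna().mean()
```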

While few of the other groups hold data as clearly or as publicly as this, others do have processes in place. The analysts at Square Mile, for example, report to the group’s management team and board every six months on the performance of its Academy of Funds: “If we’re going to hold fund managers to account, we should be subject to the same scrutiny,” says Romer-Lee.

There is an important distinction between this type of process and simply checking on a fund’s progress every few months. Almost all rating services periodically re-subject funds to analysis to see whether they still meet the criteria.

The regulatory position

Fund rating groups are, for the most part, not regulated when they provide support to advisers. They only need to be regulated if they are providing advice to the public, or to discretionary managers. For the adviser, the risk is that they have no regulatory fallback position if the rating is not done properly. This is a particular problem as it is clear that some advisers are using the rating agencies as the engine of their centralised investment process.

Advisers are now far more aware of the potential problem and are doing greater due diligence on ratings providers. Geoff Mills, director at Rayner Spencer Mills, says that some advisers will send in their due diligence people to conduct a full-scale audit on RSMR’s processes.

He adds: “The risk stays with the adviser for selection of funds. We do not know the details of the end investor. The adviser is the only person who has the conversation with the client and knows their risk tolerance, capacity for loss and so on. The regulator will want to look at the process the adviser goes through, and that the fund selection has had rigour applied.”

However, Maguire suggests that not enough is being done: “It is a real oddity of the market that fund ratings are not regulated. How this slips through regulation is hard to grasp. If you think about it logically, the fund manager and the financial adviser are regulated, but the party that sits between the two – the fund rater – isn’t. We (Fundhouse) are regulated for fund ratings, but I think we are in the minority on this. It only makes sense that the adviser to the adviser is regulated; otherwise it’s a flaw in the chain of advice, isn’t it?”

Romer-Lee says that Square Mile has a regulated business that is used for the group’s discretionary clients, and that the necessary culture runs across the whole business: “Clients expect this now anyway. As the regulators ensure that advisers do their due diligence, they expect to see their service providers run in a certain way.”

There is also the thornier question of whether regulation in this area would necessarily get investors to a better place, or whether it would simply become a tick-box exercise, as it has in other areas (such as fund disclaimers), without offering the client real protection.

Market coverage

Does size matter? Some would argue that investment portfolios can be built from a relatively small range of funds, and that an abundance of choice can be unwelcome. Nevertheless, most advisers want to feel that funds – small, new or exciting ones – are not slipping through the net, and that they are not confined to large, steady-Eddie funds from the big fund houses.

The larger players with more analysts probably have the edge here. Morningstar says its scale “allows our analysts to base our coverage on where investors put their assets – we aim to cover up to 70 per cent of assets under management in each peer group, and we leave room to add smaller funds we think merit coverage”.

Jonathan Miller, director of manager research at the group, says: “There are over 40,000 funds in Europe. Realistically, an analyst can cover around 50 funds from a qualitative perspective. As a result, we cover around 1,100-1,200 funds and around 450 in the UK.” The group also has a broad reach across other vehicles, such as investment trusts, and is about to launch a rating service for ETFs. This has its advantages: investors become familiar with one system, used across all ratings.

In practice, the other groups offer a similar level of coverage: RSMR rates around 350 funds in the UK; Square Mile around 220. All aim to give investors options in different asset classes.


Each group has its quirks about what it does and doesn’t rate. Morningstar, for example, doesn’t rate direct property funds, believing the underlying valuation processes are opaque and the funds are subject to liquidity problems. Equally, it says it is careful not to get caught up in rating the latest ‘hot’ topic.

Some groups have size limits, while others are more flexible. FundCalibre, for example, whose ratings are directed at retail investors, rates funds as small as the Mirabaud Europe ex UK Small and Medium Cap fund (£35m) and the River & Mercantile Equity Long Term Recovery fund (£23m).

Proprietary data

The fund management world has come a long way in terms of transparency. Ten years ago, extracting even the top 10 holdings from a fund group could prove tricky, with fund managers not wanting to give away their ‘secrets’. The investment trust industry has also been slow to open up. Most fund rating groups are clear that they have sufficient pulling power to extract good data from fund managers and to talk to them when they want, but it is worth advisers checking that a provider can genuinely demand transparency and access from fund managers.

Providers may also differ on whether they take their attribution analysis from the fund companies or run their own systems. Groups such as Morningstar and FE use their own data and will usually have their own proprietary systems.

The view of a good fund

This is an important philosophical point, and one that differs substantially across the ratings providers. For some groups, it comes down to whether a fund beats its stated benchmark; how it chooses to do that is up to the fund manager.

Other ratings groups question the stated benchmark, believing there can be inherent problems with the benchmarks fund managers set themselves. For example, a fund manager with an all-cap benchmark but a persistent weighting to smaller companies could have looked good relative to that benchmark without displaying real skill.

Miller at Morningstar says: “Sometimes groups will pick a benchmark because it is easier to market. However, we look at what a fund manager is trying to achieve and see if the two match up. What is the commonality? It is all about trying to identify skill versus luck.” Morningstar creates its own categorisation, believing this can give end investors a better sense of where the fund should sit in a portfolio.
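A hedged sketch of how a returns-based check might expose the mismatch described above: regress the fund’s returns on large-cap and small-cap index returns and inspect the loadings. The indices and data are placeholders; real style analysis adds constraints and more factors.

```python
# Illustrative only: detect a style tilt from returns, using least squares.
# The fund, large_cap and small_cap arrays are hypothetical monthly returns.
import numpy as np

def style_loadings(fund: np.ndarray, large_cap: np.ndarray,
                   small_cap: np.ndarray) -> tuple[float, float]:
    """Least-squares exposures of a fund to two style indices."""
    X = np.column_stack([np.ones_like(fund), large_cap, small_cap])
    coefs, *_ = np.linalg.lstsq(X, fund, rcond=None)
    return coefs[1], coefs[2]  # (large-cap loading, small-cap loading)
```

A fund benchmarked against an all-cap index that loads far more heavily on the small-cap series has, in effect, been riding the small-cap premium rather than demonstrating selection skill against its stated benchmark.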

Other groups aim to sort this out in different ways: at FundCalibre, for example, the value-versus-growth element would come out in the qualitative screen: “We are never saying that a fund is going to outperform, but more that we believe they will do the job well again.”

The view of a bad fund

Maguire says: “There are very few negative ratings, and we continue to see reports saying that most UK and US equity managers underperform. That is to say, they received a fund management fee for a service they did not deliver, and we think this warrants the need for negative ratings – logically there should be an abundance of them if they do underperform regularly. But there aren’t negative ratings in meaningful numbers.”

However, it is difficult for fund ratings groups to justify spending time isolating funds to rate negatively, except through a purely quantitative screen. Some, such as Square Mile, will do consultancy work with clients on their existing holdings, which will include telling them which funds are weak.

Fund versus manager

This was an early area of controversy: should it be the fund that is rated, or the manager? After all, the theory goes, it is usually the manager’s skill that delivers strong performance, and fund managers change jobs frequently, necessitating frequent changes of rating. Citywire was an early innovator here, splicing together a manager’s performance across funds to build a picture of their overall track record over time. FE also has an ‘Alpha Manager’ rating, which aims to identify the top 10 per cent of managers.
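In data terms, the splicing idea is straightforward. A minimal, hypothetical sketch – not Citywire’s actual system – might look like this:

```python
# A minimal sketch of splicing a manager's track record across funds.
# Each Series holds monthly returns, already trimmed to the manager's
# tenure on that fund; funds and dates are hypothetical.
import pandas as pd

def splice_track_record(tenures: list[pd.Series]) -> pd.Series:
    """Stitch per-fund return series into one career-long record."""
    record = pd.concat(tenures).sort_index()
    # Where tenures overlap (e.g. co-managed funds), keep the first entry.
    return record[~record.index.duplicated(keep="first")]
```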

This approach has its merits, but fund managers may perform differently in different environments. Newton, for example, has been home to a number of apparently strong managers who have not replicated their success elsewhere. It seems likely that the strong analyst support available at the group flattered manager performance. Environment is important.

How should IFAs use ratings?  

Maguire sees three main uses for fund ratings. The first two are fund selection and the creation of buy lists – assuming a service level is in place with the fund ratings business, so that funds are selected specifically for the adviser’s business and monitored on its behalf. He believes a true third-party independent view on a fund also helps with governance and regulation.

Maguire says the relationship should be clearly structured: “(Advisers need to) send them questionnaires to complete and visit them and ask to sit in their fund manager interviews. Then, agree a service level and start using the ratings in a way that is bespoke to their business and client needs.”

And finally, advisers need to be clear that there is a difference between a good rating and a good investment. Maguire says: “We rate many bond funds well, but are cautioning clients around them at the moment.” FundCalibre, similarly, aims to identify those fund managers who will repeat the job they have done well to date, not those likely to outperform in the next quarter.

The risks

The key risk of using ratings badly is that clients are left holding weak funds or are not alerted to problem funds. Obvious questions include whether a rating agency has properly evaluated bond or property liquidity, and whether it has investigated complex funds such as GARS, Invesco GTR and other popular strategies. Ratings are there to protect clients: FundCalibre, for example, aims to make its analysis clear enough for retail investors.

Maguire concludes: “For us, the test is simple. We have a single test – when doing any fund rating (say, writing a report or attending a fund manager interview), the person looking over our shoulder is a pensioner. Whatever we do must stand up to this level of scrutiny. We aren’t doing hotel ratings or rating a holiday. There is a significant moral dimension to what we do and we think this test keeps us honest.”