The point is valid - base the test approach on the identified risks. But there are numerous problems with that approach. A few are mentioned below.
The stakeholders
The "stakeholders" who should be able to list and prioritise risks often barely know what they've bought or ordered. Most organisations or customers have little idea of what's going to hit them when they embark on projects where risk based testing could be of good use. They are about to board the expectation roller coaster, where each day brings new challenges and decisions, and on top of that they have to prepare their own organisation and their customers for the changes they've ordered. They are not used to working with the fluffy term "risk", and being the test manager trying to facilitate them is no easy task.
The professional testers
This group of project participants has to be kept on a short leash when it comes to risks. Depending on their interest, experience and beliefs, they can carpet bomb any risk session in a matter of minutes. Most of it is very relevant input, but too often it lies far outside what the project is capable of dealing with - even with plenty of resources (time, hands and money).
That said, testers are usually your best chance of success when it comes to qualified risk input, thanks to their experience. But put on your angry test manager hat and cut away any risks that are improbable or impossible to deal with.
Development and service organisation
This might be your chance of some significant input to the risk list. Understand the changes that are being implemented: what's being reused, what's new development, who's the experienced crew, who's new. Totally uncharted territory vs. known land. That's something you can get from the developers. Not on a silver platter, but they may have an idea early in the project about where and what will be impacted. Architects are a similarly good source for this kind of input, although they tend to be a bit further into the universe.
The service organisation - well, they knew where it failed last time, and the time before that. Service delivery managers are usually the best source of historic insight into failures that should have been addressed by the testing effort. If they have kept a record of their findings (yes, it could be recorded systematically in a database), they can even quantify and group this for you.
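As a minimal sketch of what "quantify and group" might look like, assuming the service organisation's findings can be exported as simple records (the field names and data here are hypothetical, purely for illustration):

```python
from collections import Counter

# Hypothetical export of historic service findings; field names are assumptions.
findings = [
    {"component": "billing", "severity": "high"},
    {"component": "billing", "severity": "low"},
    {"component": "login", "severity": "high"},
    {"component": "reporting", "severity": "medium"},
    {"component": "billing", "severity": "medium"},
]

# Count failures per component - the components with the longest history of
# failures are candidates for extra test focus.
by_component = Counter(f["component"] for f in findings)

for component, count in by_component.most_common():
    print(f"{component}: {count} historic failure(s)")
```

Even a simple tally like this turns "we think the billing module is shaky" into a quantified argument the test manager can bring to a risk session.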
And if you, as the responsible test manager, are not able to get useful input from these critical parts of the project organisation, you should reconsider your position or be prepared for a "memorable project experience".
This might also be a good place to get some input on risks affecting the testing itself. Not in terms of tangible risks that can be listed and prioritised, but rather in terms of management's understanding of which risks are worth focusing on. For one thing, this gives the test manager an indication of whether the test organisation and (project) management are aligned in their understanding of risks. If not, then that is the primary risk to address - and stop testing while priorities are being sorted out.
In fact, this is where you start. Get a risk definition for the project that is agreed with management - be it project management, a project owner or a sponsor. Then you know where to aim, and what to prioritise as a test manager.
For the record - this blog is not dead, it just took a 14-week vacation. Happy testing. We're back in action.