When building a Bayesian belief network, usually a large number of probabilities have to be assessed by experts in the domain of application. Experience shows that experts are often reluctant to assess all the probabilities required, feeling unable to give assessments with a high level of accuracy. We argue that the elicitation of probabilities from experts can be supported to a large extent by iteratively performing sensitivity analyses of the belief network in the making, starting from rough initial assessments. Since a sensitivity analysis gives insight into which probabilities require a high level of accuracy and which do not, it allows further elicitation efforts to be focused. We propose an elicitation procedure in which, alternately, sensitivity analyses are performed and probability assessments are refined, until satisfactory behaviour of the belief network is obtained, until the costs of further elicitation outweigh the benefits of higher accuracy, or until higher accuracy can no longer be attained for lack of knowledge.
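The alternation of sensitivity analysis and refinement described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's procedure: the toy two-node network (X → Y), the finite-difference sensitivity measure, the `tolerance` and `budget` stopping criteria, and all function names are assumptions introduced here for illustration.

```python
# Illustrative sketch of iterative, sensitivity-guided probability elicitation.
# The toy network X -> Y and all names here are assumptions, not the paper's.

def posterior_x_given_y(params):
    """P(X=true | Y=true) by Bayes' rule for the toy network X -> Y."""
    px, py_x, py_notx = params["p(x)"], params["p(y|x)"], params["p(y|~x)"]
    py = py_x * px + py_notx * (1 - px)
    return py_x * px / py

def sensitivity(params, name, delta=0.01):
    """Finite-difference effect of perturbing one probability on the output
    of interest; probabilities with small effect need no further refinement."""
    perturbed = dict(params)
    perturbed[name] = min(1.0, params[name] + delta)
    return abs(posterior_x_given_y(perturbed) - posterior_x_given_y(params)) / delta

def elicit(params, refine, tolerance=0.5, budget=10):
    """Alternate sensitivity analyses and refinement of the assessments until
    behaviour is satisfactory (all sensitivities below tolerance) or the
    elicitation budget is spent (costs outweigh benefits)."""
    for _ in range(budget):
        sens = {name: sensitivity(params, name) for name in params}
        name, value = max(sens.items(), key=lambda kv: kv[1])
        if value < tolerance:                      # satisfactory behaviour
            break
        params[name] = refine(name, params[name])  # ask the expert again
    return params

# Rough initial assessments; `refine` stands in for a renewed expert judgement
# (a no-op here, as in practice the new value comes from the domain expert).
rough = {"p(x)": 0.5, "p(y|x)": 0.8, "p(y|~x)": 0.3}
final = elicit(rough, refine=lambda name, old: old)
```

Focusing refinement on the parameter with the largest sensitivity is one simple way to operationalise "which probabilities require a high level of accuracy"; in practice the output of interest and the refinement step would come from the application and the expert.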