Transferable Belief Model



This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.



The **transferable belief model (TBM)** is an elaboration on the Dempster–Shafer theory (DST) of evidence, developed by Philippe Smets, who proposed his approach in response to Zadeh's example against Dempster's rule of combination. In contrast to the original DST, the TBM adopts the open-world assumption, relaxing the requirement that all possible outcomes be known. Under the open-world assumption, Dempster's rule of combination is adapted so that there is no normalization. The underlying idea is that the probability mass assigned to the empty set indicates an unexpected outcome, e.g. belief in a hypothesis outside the frame of discernment. This adaptation violates the probabilistic character of the original DST and of Bayesian inference. Therefore, the authors replaced notation such as *probability masses* and *probability update* with terms such as *degrees of belief* and *transfer*, giving rise to the name of the method: the *transferable belief model*.^{[1]}^{[2]}

## Zadeh's example in TBM context

Lotfi Zadeh describes an information fusion problem.^{[3]} A patient has an illness that can be caused by three different factors, *A*, *B* or *C*. Doctor 1 says that the patient's illness is very likely caused by *A* (very likely, meaning probability *p* = 0.95), but *B* is also possible, though not likely (*p* = 0.05). Doctor 2 says that the cause is very likely *C* (*p* = 0.95), but *B* is also possible, though not likely (*p* = 0.05). How is one to form one's own opinion from this?

Bayesian updating of the first opinion with the second (or the other way round) implies certainty that the cause is *B*. Dempster's rule of combination leads to the same result. This can be seen as paradoxical: although the two doctors point at different causes, *A* and *C*, they both agree that *B* is not likely. (For this reason the standard Bayesian approach is to adopt Cromwell's rule and avoid the use of 0 or 1 as probabilities.)
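The contrast between the two combination rules can be sketched in Python. The following is a minimal illustration (propositions are represented as frozensets; the function names are my own, not from the literature): the unnormalized TBM rule puts almost all mass on the empty set, signalling conflict, while Dempster's normalized rule forces certainty in *B*.

```python
from itertools import product

def combine(m1, m2, normalize=False):
    """Conjunctive combination of two mass functions (dicts mapping
    frozensets to mass). With normalize=False this is the TBM rule:
    conflict stays on the empty set. With normalize=True it is
    Dempster's rule of combination."""
    out = {}
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        out[a] = out.get(a, 0.0) + mb * mc
    if normalize:
        k = out.pop(frozenset(), 0.0)  # mass on the empty set = conflict
        out = {a: v / (1.0 - k) for a, v in out.items()}
    return out

A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
doctor1 = {A: 0.95, B: 0.05}
doctor2 = {C: 0.95, B: 0.05}

tbm = combine(doctor1, doctor2)                       # m(empty) = 0.9975, m({B}) = 0.0025
dempster = combine(doctor1, doctor2, normalize=True)  # m({B}) = 1.0
```

In the TBM reading, the large mass on the empty set is informative: it suggests the true cause may lie outside {*A*, *B*, *C*} altogether.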

## Formal definition

The TBM describes beliefs at two levels:^{[4]}

- a *credal level* where *beliefs* are entertained and quantified by belief functions,
- a *pignistic level* where *beliefs* can be used to make decisions and are quantified by probability functions.

### Credal level

According to the DST, a probability mass function is defined such that:^{[1]}

$$m\colon 2^X \to [0,1]$$

with

$$\sum_{A \in 2^X} m(A) = 1$$

where the power set $2^X$ contains all possible subsets of the frame of discernment $X$. In contrast to the DST, the mass allocated to the empty set is not required to be zero, and hence $m(\emptyset) \geq 0$ generally holds true. The underlying idea is that the frame of discernment is not necessarily exhaustive, and thus belief allocated to a proposition $A \subseteq X$ is in fact allocated to $A \cup U$, where $U$ is the set of unknown outcomes. Consequently, the combination rule underlying the TBM corresponds to Dempster's rule of combination, except for the normalization that grants $m(\emptyset) = 0$. Hence, in the TBM any two independent mass functions $m_1$ and $m_2$ are combined to a single function $m_{1 \cap 2}$ by:^{[5]}

$$m_{1 \cap 2}(A) = \sum_{B \cap C = A} m_1(B)\, m_2(C)$$

where

$$A, B, C \in 2^X.$$

In the TBM the *degree of belief* in a hypothesis $A \subseteq X$ is defined by a function:^{[1]}

$$\operatorname{bel}\colon 2^X \to [0,1],\qquad \operatorname{bel}(A) = \sum_{\emptyset \neq B \subseteq A} m(B)$$

with

$$\operatorname{bel}(\emptyset) = 0.$$
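The credal-level definitions can be sketched in Python. This is a minimal illustration (the frame, mass values, and function name are my own, chosen for the example): the belief in a proposition sums the masses of its non-empty subsets, and the mass on the empty set is permitted under the open-world assumption.

```python
def bel(m, a):
    """Degree of belief in proposition a: the sum of the masses of all
    non-empty subsets of a."""
    return sum(v for b, v in m.items() if b and b <= a)

# hypothetical mass function on a frame X = {x, y, z}; the values are
# illustrative only
m = {
    frozenset(): 0.1,       # open world: mass on the empty set allowed
    frozenset("x"): 0.3,
    frozenset("xy"): 0.4,
    frozenset("xyz"): 0.2,
}
assert abs(sum(m.values()) - 1.0) < 1e-12  # masses sum to one

belief = bel(m, frozenset("xy"))  # 0.3 + 0.4; the empty set is excluded
```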

### Pignistic level

When a decision must be made the *credal beliefs* are transferred to pignistic probabilities by:^{[4]}

$$\operatorname{BetP}(x) = \sum_{x \in A \subseteq X} \frac{m(A)}{|A|\,\bigl(1 - m(\emptyset)\bigr)}$$

where $x$ denotes the atoms (also denoted as singletons)^{[6]} and $|A|$ the number of atoms that appear in $A$. Hence, probability masses are equally distributed among the atoms of $A$. This strategy corresponds to the principle of insufficient reason (also denoted as principle of maximum entropy), according to which an *unknown* distribution most probably corresponds to a uniform distribution.

In the TBM *pignistic probability functions* are described by functions $\operatorname{BetP}$. Such a function satisfies the probability axioms:^{[4]}

$$\operatorname{BetP}\colon X \to [0,1]$$

with

$$\sum_{x \in X} \operatorname{BetP}(x) = 1.$$

Philippe Smets introduced them as *pignistic* to stress the fact that those probability functions are based on incomplete data and that their only purpose is a forced decision, e.g. to place a bet. This is in contrast to the *credal beliefs* described above, whose purpose is to represent the actual *belief*.^{[1]}
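A sketch of the pignistic transformation in Python, under the same illustrative assumptions as before (hypothetical frame and mass values): each mass $m(A)$ is split equally over the atoms of $A$, with the empty-set mass discounted, so the result is an ordinary probability distribution.

```python
def betp(m, frame):
    """Pignistic transformation: split each mass m(A) equally over the
    atoms of A, discounting the mass on the empty set."""
    norm = 1.0 - m.get(frozenset(), 0.0)
    return {x: sum(v / (len(a) * norm) for a, v in m.items() if x in a)
            for x in frame}

# hypothetical open-world mass function on a frame X = {x, y, z}
m = {frozenset(): 0.1, frozenset("x"): 0.3,
     frozenset("xy"): 0.4, frozenset("xyz"): 0.2}

p = betp(m, "xyz")
assert abs(sum(p.values()) - 1.0) < 1e-12  # BetP satisfies the probability axioms
```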

## Open world example

When tossing a coin one usually assumes that Head or Tail will occur, so that $m(\{\text{Head}\}) + m(\{\text{Tail}\}) = 1$. The open-world assumption is that the coin can be stolen in mid-air, disappear, break apart or otherwise fall sideways so that neither Head nor Tail occurs. In this case the power set of {Head, Tail} is considered, and there is a decomposition of the overall probability (i.e. 1) of the following form:

$$m(\emptyset) + m(\{\text{Head}\}) + m(\{\text{Tail}\}) + m(\{\text{Head},\text{Tail}\}) = 1.$$
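A concrete open-world mass assignment for the coin might look as follows (the numbers are illustrative only, chosen for this sketch):

```python
# illustrative open-world mass assignment for a coin toss: a small mass
# is reserved for the empty set, i.e. for "neither Head nor Tail"
m = {
    frozenset(): 0.02,                  # coin lost, lands on edge, ...
    frozenset({"Head"}): 0.47,
    frozenset({"Tail"}): 0.47,
    frozenset({"Head", "Tail"}): 0.04,  # ignorance between the two outcomes
}
assert abs(sum(m.values()) - 1.0) < 1e-12  # the decomposition sums to 1
```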


## Notes

1. ^{a}^{b}^{c}^{d} Smets, Ph. (1990). "The combination of evidence in the transferable belief model". *IEEE Transactions on Pattern Analysis and Machine Intelligence*. **12** (5): 447–458. CiteSeerX 10.1.1.377.5969. doi:10.1109/34.55104.
2. Dempster, A.P. (2007). "The Dempster-Shafer calculus for statisticians". *International Journal of Approximate Reasoning*. **48** (2): 365–377. doi:10.1016/j.ijar.2007.03.004.
3. Zadeh, L.A. (1984). "Review of Shafer's A Mathematical Theory of Evidence". *AI Magazine*, 5(3).
4. ^{a}^{b}^{c} Smets, Ph.; Kennes, R. (1994). "The transferable belief model". *Artificial Intelligence*. **66** (2): 191–234. doi:10.1016/0004-3702(94)90026-4.
5. Haenni, R. (2006). "Uncover Dempster's Rule Where It Is Hidden". In: *Proceedings of the 9th International Conference on Information Fusion (FUSION 2006)*, Florence, Italy, 2006.
6. Shafer, Glenn (1976). *A Mathematical Theory of Evidence*. Princeton University Press. ISBN 0-608-02508-9.

## References

- Smets, Ph. (1988). "Belief function". In: *Non Standard Logics for Automated Reasoning*, ed. Smets Ph., Mamdani A., Dubois D. and Prade H. Academic Press, London.
- Smets, Ph. (1990). "The combination of evidence in the transferable belief model". *IEEE Transactions on Pattern Analysis and Machine Intelligence*. **12** (5): 447–458. CiteSeerX 10.1.1.377.5969. doi:10.1109/34.55104.
- Smets, Ph. (1993). "An axiomatic justification for the use of belief function to quantify beliefs". IJCAI'93 (Inter. Joint Conf. on AI), Chambéry, 598–603.
- Smets, Ph.; Kennes, R. (1994). "The transferable belief model". *Artificial Intelligence*. **66** (2): 191–234. doi:10.1016/0004-3702(94)90026-4.
- Smets, Ph.; Kruse, R. (1995). "The transferable belief model for belief representation". In: Smets, Ph. and Motro, A. (eds.) *Uncertainty Management in Information Systems: from Needs to Solutions*. Kluwer, Boston.
- Haenni, R. (2006). "Uncover Dempster's Rule Where It Is Hidden". In: *Proceedings of the 9th International Conference on Information Fusion (FUSION 2006)*, Florence, Italy, 2006.
- Ramasso, E.; Rombaut, M.; Pellerin, D. (2007). "Forward-Backward-Viterbi procedures in the Transferable Belief Model for state sequence analysis using belief functions". *ECSQARU*, Hammamet, Tunisia.
- Touil, K.; Zribi, M.; Benjelloun, M. (2007). "Application of transferable belief model to navigation system". *Integrated Computer-Aided Engineering*. **14** (1): 93–105. doi:10.3233/ICA-2007-14108.
- Dempster, A.P. (2007). "The Dempster-Shafer calculus for statisticians". *International Journal of Approximate Reasoning*. **48** (2): 365–377. doi:10.1016/j.ijar.2007.03.004.

