Many clinical situations, such as cancer, can be described in terms of the conditions that individuals can be in ("states"), how they can move among such states ("transitions"), and how likely such moves are ("transition probabilities"). In these situations, state-transition models (STMs) are often well suited to the decision problem, as they conceptualize it in terms of a set of states and the transitions among them. An STM of a disease process should reflect the disease's natural history, the expected prognostic pathways in the absence of intervention, and the effects of treatment. STMs are well suited to comparing intermediate and long-term clinical outcomes in chronic diseases such as diabetes and cancer.
The two STM frameworks commonly used in pharmacoeconomics are cohort models ("Markov models") and individual-level models, commonly known as "first-order Monte Carlo" or "microsimulation" models. Before choosing between cohort and individual-level simulation, however, the characteristics of the population that must be carried through the model (i.e., state descriptors or tracker variables) must be specified.
An advantage of using an individual-level STM is the ability to model individual characteristics as continuous variables and to evaluate dynamic intervention strategies—ones in which future decisions depend on current and past patient characteristics.
Individual-level
STMs, however, require more computation
time, which may be important if probabilistic
sensitivity analyses or value-of-information analyses
are performed. Markov chain models are a natural approach when modeling the transitions of patients between discrete health states over time, for example, progression through the stages of a disease. Markov models represent disease processes that evolve over time and are well suited to modeling the progression of chronic disease; this type of model can handle disease recurrence and estimate long-term costs and life years gained/QALYs.
Steps in conducting a Markov model are:
1. Define states and allowable transitions
2. Choose a cycle length
3. Specify a set of transition probabilities between states
4. Assign a cost and utility to each health state
5. Identify the initial distribution of the population
6. Choose a method of evaluation
States: The states should be specified as mutually exclusive (any individual can be in only one state during each cycle) and collectively exhaustive (every individual in the initial cohort must be in a state during each cycle), and they should adequately capture the benefits or harms of the interventions and transitions. Each state is homogeneous: every individual in that state has the same transition probabilities, implying that any characteristics that determine those probabilities must not differ within the state.
Initial state vector: the distribution of the cohort across the states at the start of the simulation.
Transition probabilities: the per-cycle probabilities of moving between states, usually estimated from the academic literature.
Cycle length: The time horizon of the model should be long enough to capture all health effects and costs relevant to the decision problem. The choice of cycle length should be based on the clinical problem, remaining life expectancy, and computational efficiency.
State values ("rewards"): the costs and utilities assigned to each health state.
Logical tests: performed at the beginning of each cycle to determine the transitions.
Termination criteria: the condition under which the simulation stops.
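The steps above can be sketched as a minimal cohort Markov trace in Python (a toy three-state model; all transition probabilities, costs, and utilities below are invented for illustration, not taken from any study):

```python
# Toy three-state cohort Markov model: Healthy, Sick, Dead (all numbers invented).
states = ["Healthy", "Sick", "Dead"]

# Step 3: per-cycle transition probabilities (each row sums to 1).
P = [
    [0.90, 0.08, 0.02],  # from Healthy
    [0.00, 0.85, 0.15],  # from Sick
    [0.00, 0.00, 1.00],  # Dead is absorbing
]

# Step 4: cost and utility assigned to each health state, per cycle.
cost = [100.0, 1500.0, 0.0]
utility = [0.95, 0.60, 0.0]

# Step 5: initial state vector -- the whole cohort starts Healthy.
v = [1.0, 0.0, 0.0]

# Step 6: evaluation -- run the cohort forward a fixed number of cycles,
# accumulating expected cost and QALYs per person.
total_cost = 0.0
total_qalys = 0.0
for cycle in range(40):
    total_cost += sum(vi * ci for vi, ci in zip(v, cost))
    total_qalys += sum(vi * ui for vi, ui in zip(v, utility))
    v = [sum(v[i] * P[i][j] for i in range(3)) for j in range(3)]
```

Each pass through the loop applies the transition matrix to the state vector (one cycle) and adds the state rewards weighted by the cohort's current distribution; a real model would also discount costs and QALYs and apply termination criteria.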
TRANSITION PROBABILITIES AND RATES
Data retrieved from the academic literature are usually expressed as rates, which may vary from 0 to infinity (for example, a mortality rate of 2% a year for disease X), whereas probabilities vary from 0 to 1 over a specific period of time. One important observation is the possible confusion between the terms "rate" and "probability".
Miller and Homan recommend that in some circumstances it may be best to estimate rates from the data and then transform these into probabilities of transition over a period of time.
To estimate rates in a multistate model, a cohort study recording all state transitions and sojourn times provides the ideal source of data.
A rate represents the instantaneous potential for a transition at any given point in time, whereas a probability is the proportion of the population at risk that makes the transition during a specific period of time. Therefore, probabilities available in the literature may not reflect the same period of time as the Markov cycle of the model in use.
It is then necessary to convert transition rates into transition probabilities. It is common to use the formula p(t) = 1 − e^(−rt), where r is the rate and t is the cycle length (referred to here as the "simple formula"). But this is incorrect for most models with two or more transitions, essentially because a person can experience more than one type of event in a single cycle. For example, they might go from healthy to ill and from ill to dead within a single cycle, or straight from healthy to dead. The simple formula is always wrong if there are competing risks (that is, if from one state there are two or more other states that a person can move to).
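These conversions can be sketched in Python (the 2% annual mortality figure echoes the example above; the one-month cycle length is an assumption for illustration):

```python
import math

def prob_to_rate(p, t=1.0):
    """Constant rate implied by probability p over period t: r = -ln(1 - p) / t."""
    return -math.log(1.0 - p) / t

def rate_to_prob(r, t=1.0):
    """The 'simple formula' p(t) = 1 - exp(-r*t); valid only without competing risks."""
    return 1.0 - math.exp(-r * t)

# A 2% annual mortality probability, re-expressed over a one-month cycle:
r = prob_to_rate(0.02, t=1.0)           # rate per year, ~0.0202
p_month = rate_to_prob(r, t=1.0 / 12)   # per-cycle probability, ~0.00168

# Naive scaling (0.02 / 12 = 0.00167) happens to be close here,
# but the discrepancy grows as probabilities get larger.
```

With competing risks, applying the simple formula to each rate separately misallocates individuals who experience more than one event in a cycle; the exact conversion works with the full rate (intensity) matrix, which is what tools such as the msm package in R handle.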
In discrete-time Markov chains, transitions are described in terms of probabilities, which represent the expected proportions that make the various transitions in each cycle or time-period.
For continuous data, for example the median time to an event, the transition probability is estimated via rates, as outlined by Miller and Homan (1994). For example, the pooled median progression-free survival time for the Y group is 22 weeks; the per-cycle transition probability can therefore be calculated as:
p_ij(t) = 1 − e^(−q_ij · t) for i ≠ j

r = −ln(0.5) / 22 = 0.0315
p = 1 − e^(−0.0315 × 3) = 0.090

where r is the weekly rate implied by the 22-week median survival (assuming exponentially distributed event times) and t = 3 weeks is the cycle length.
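The arithmetic above can be checked with a few lines of Python (the 3-week cycle length is an assumption carried over from the worked example):

```python
import math

median_pfs_weeks = 22.0   # pooled median progression-free survival
cycle_weeks = 3.0         # model cycle length (assumed here)

# Median survival under an exponential model: S(median) = 0.5 => r = ln(2) / median
r = -math.log(0.5) / median_pfs_weeks   # weekly transition rate, ~0.0315

# Per-cycle transition probability via p = 1 - exp(-r*t)
p = 1.0 - math.exp(-r * cycle_weeks)
print(round(p, 3))  # prints 0.09
```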
For binary (0/1) data, that is, the probability that an individual will transition from one state to another within a specified time period, the transition probability is calculated as follows. For example, the pooled response rate for product Y is 34% over the 7 treatment cycles; therefore the transition probability of moving into the "respond" health state in any cycle of the model is given by:

p = 1 − (1 − 0.34)^(1/7) = 0.058
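The same calculation in Python (the 34% response rate and 7 cycles come from the example above):

```python
cumulative_response = 0.34  # pooled response rate over the full treatment course
n_cycles = 7

# Per-cycle probability p such that 1 - (1 - p)^n equals the cumulative probability.
p = 1.0 - (1.0 - cumulative_response) ** (1.0 / n_cycles)
print(round(p, 3))  # prints 0.058
```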
Markov Models
A first-order Markov model predicts that the state of an entity at a particular position in a sequence depends on the state of one entity at the preceding position (e.g. in various cis-regulatory elements in DNA and motifs in proteins). A second-order Markov model predicts that the state of an entity at a particular position in a sequence depends on the state of two entities at the two preceding positions (e.g. in codons in DNA). Similarly, a fifth-order Markov model predicts the state of the sixth entity in a sequence based on the previous five entities (e.g. in hexamers in coding sequence). It has been observed that the probability of occurrence of pairs of codons (hexamers) in a coding sequence is significantly higher than in noncoding sequence.
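A first-order model of this kind can be estimated by simply counting adjacent pairs in a sequence (a toy sketch; the DNA string below is invented for illustration):

```python
from collections import defaultdict

def first_order_transition_probs(sequence):
    """Estimate first-order Markov transition probabilities from a sequence:
    P(next state | current state), obtained by counting adjacent pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    probs = {}
    for state, nxt_counts in counts.items():
        total = sum(nxt_counts.values())
        probs[state] = {s: c / total for s, c in nxt_counts.items()}
    return probs

probs = first_order_transition_probs("ACGACGTTAC")
print(probs["G"])  # prints {'A': 0.5, 'T': 0.5}
```

A second- or fifth-order model would condition on the previous two or five symbols instead of one, at the cost of exponentially more parameters to estimate.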
For most models with larger numbers of states, the formulas become extremely complicated, and many researchers recommend using the free statistical software R (http://www.r-project.org) and its msm package.

References:
Miller DK, Homan SM. Determining transition probabilities: confusion and suggestions. Medical Decision Making 1994;14:52-8.
Siebert U, et al. Recommendations of the ISPOR-SMDM Joint Modeling Good Research Practices Task Force: state-transition modeling. Medical Decision Making 2012;32:690-700. http://mdm.sagepub.com/content/32/5/690.full.pdf+html
