Title: Ising Model of Explanatory Coherence
Version: 0.2.0
Description: Theories are one of the most important tools of science. Although psychologists have discussed problems of theory in their discipline for a long time, weak theories are still widespread in most subfields. One possible reason for this is that psychologists lack the tools to systematically assess the quality of their theories. A computational model for formal theory evaluation based on the concept of explanatory coherence was developed previously (Thagard, 1989, <doi:10.1017/S0140525X00057046>). However, this model leaves room for improvement and is not available in software that psychologists typically use. Therefore, this R package provides a new implementation of explanatory coherence based on the Ising model.
License: MIT + file LICENSE
Encoding: UTF-8
LazyData: true
RoxygenNote: 7.1.1
Imports: IsingSampler, igraph, qgraph
Suggests: testthat
NeedsCompilation: no
Packaged: 2020-11-25 16:13:47 UTC; Maximilian Maier
Author: Maximilian Maier [aut, cre], Noah van Dongen [ths], Denny Borsboom [ths]
Maintainer: Maximilian Maier <maximilianmaier0401@gmail.com>
Repository: CRAN
Date/Publication: 2020-11-27 10:10:03 UTC
IMEC
Description
This package computes the Ising Model of Explanatory Coherence for theory comparison and theory appraisal.
Construct Explanatory Network
initializeNetwork constructs an initial empty explanatory network; explain and contradict specify the explanatory relations.
Calculate IMEC
computeIMEC computes the Ising model of explanatory coherence and returns an object of class IMEC. Use summary to summarize the result and plot to plot the explanatory relations.
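Because the package is intended for theory comparison as well as theory appraisal, a minimal two-theory sketch of this workflow follows; the hypothesis and phenomenon names and the evidence weights are purely illustrative, and the calls assume the interfaces documented in the entries below.
library(IMEC)
T1 <- c("H1")               # propositions of theory 1 (illustrative)
T2 <- c("H2")               # propositions of theory 2 (illustrative)
Phenomena <- c("E1", "E2")  # phenomena to be explained
Evidence <- c(2, 2)         # evidence for each phenomenon
explanations <- initializeNetwork(Phenomena, T1, T2)
explanations <- explain("H1", "E1", explanations)
explanations <- explain("H1", "E2", explanations)
explanations <- explain("H2", "E2", explanations)
coherence <- computeIMEC(explanations, Evidence, Phenomena, T1, T2)
summary(coherence)  # explanatory coherence of the propositions in both theories
plot(coherence)     # visualize the explanatory relations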
Computes the Ising model of explanatory coherence.
Description
Computes IMEC based on previously specified explanatory relations.
Usage
computeIMEC(
matrix,
evidence,
phenomena,
theory1,
theory2 = character(),
analytic = TRUE,
analogy = numeric()
)
Arguments
matrix: matrix of explanatory relations.
evidence: vector of evidence for the phenomena.
phenomena: vector of phenomena; should be the same length as evidence.
theory1: vector of propositions in theory1.
theory2: vector of propositions in theory2.
analytic: whether the result should be calculated analytically or (for large networks) estimated using a Metropolis-Hastings algorithm enhanced with coupling from the past.
analogy: this argument is reserved for adding analogy in the future and should currently not be used.
Value
Returns an IMEC object, which contains the explanatory coherence of the propositions, the explanatory relations, the evidence, and the phenomena.
Examples
# simple example comparing two hypotheses, one of them with more explanatory breadth
T1 <- c("H1", "H2")
Phenomena <- c("E1", "E2")
Thresholds <- c(2,2)
explanations <- initializeNetwork(Phenomena, T1)
explanations <- explain("H1", "E1", explanations)
explanations <- explain("H1", "E2", explanations)
explanations <- explain("H2", "E2", explanations)
explanations <- contradict("H1", "H2", explanations)
coherence <- computeIMEC(explanations, Thresholds, Phenomena, T1)
summary(coherence)
plot(coherence)
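For larger networks, the analytic computation can become infeasible; the analytic argument documented above then allows switching to the Metropolis-Hastings estimate. A minimal sketch reusing the objects from the example above (the variable name is arbitrary):
coherence_mh <- computeIMEC(explanations, Thresholds, Phenomena, T1, analytic = FALSE)
summary(coherence_mh)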
contradict
Description
Sets a contradictory relation between a set of propositions and another proposition or phenomenon. If more than one proposition is used, the edge weight is reduced accordingly.
Usage
contradict(Explanation, Explanandum, matrix, weight = 4)
Arguments
Explanation: Vector of propositions that contradict the explanandum.
Explanandum: A proposition or phenomenon that is contradicted.
matrix: Matrix of explanatory relations that is modified.
weight: Strength of the connection (i.e., strength of the contradiction).
Value
Returns the explanatory matrix with the edge weights modified according to the specified contradiction.
Examples
# simple example comparing two hypotheses, one of them with more explanatory breadth
T1 <- c("H1", "H2")
Phenomena <- c("E1", "E2")
Thresholds <- c(2,2)
explanations <- initializeNetwork(Phenomena, T1)
explanations <- explain("H1", "E1", explanations)
explanations <- explain("H1", "E2", explanations)
explanations <- explain("H2", "E2", explanations)
explanations <- contradict("H1", "H2", explanations)
coherence <- computeIMEC(explanations, Thresholds, Phenomena, T1)
summary(coherence)
plot(coherence)
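The weight argument defaults to 4; a smaller value encodes a weaker contradiction. A minimal sketch reusing the network from the example above (the value 2 is purely illustrative):
explanations <- contradict("H1", "H2", explanations, weight = 2)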
explain
Description
Sets an explanatory relation between a set of propositions and a proposition or phenomenon. If more than one proposition is used, the edge weight is reduced accordingly.
Usage
explain(Explanation, Explanandum, matrix, weight = 1)
Arguments
Explanation: Vector of explanations that explain the explanandum.
Explanandum: A proposition or phenomenon that is explained.
matrix: Matrix of explanatory relations that is modified.
weight: Strength of the connection (i.e., quality of the explanation).
Value
Returns the explanatory matrix with the edge weights modified according to the specified explanation
Examples
# simple example comparing two hypotheses, one of them with more explanatory breadth
T1 <- c("H1", "H2")
Phenomena <- c("E1", "E2")
Thresholds <- c(2,2)
explanations <- initializeNetwork(Phenomena, T1)
explanations <- explain("H1", "E1", explanations)
explanations <- explain("H1", "E2", explanations)
explanations <- explain("H2", "E2", explanations)
explanations <- contradict("H1", "H2", explanations)
coherence <- computeIMEC(explanations, Thresholds, Phenomena, T1)
summary(coherence)
plot(coherence)
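As noted in the description above, several propositions can jointly explain the same explanandum, in which case the edge weight is divided among them. A minimal sketch reusing the network from the example above:
explanations <- explain(c("H1", "H2"), "E2", explanations)  # joint explanation of E2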
Initialize the explanatory network
Description
This function initializes the network in which explanatory relations can be stored later.
Usage
initializeNetwork(phenomena, theory1, theory2 = character())
Arguments
phenomena: Vector of phenomena that are explained.
theory1: Vector of propositions included in theory 1.
theory2: Vector of propositions included in theory 2 (only set manually if theory comparison is intended).
Value
An empty edge matrix (all edges 0)
Examples
# simple example comparing two hypotheses, one of them with more explanatory breadth
T1 <- c("H1", "H2")
Phenomena <- c("E1", "E2")
Thresholds <- c(2,2)
explanations <- initializeNetwork(Phenomena, T1)
explanations <- explain("H1", "E1", explanations)
explanations <- explain("H1", "E2", explanations)
explanations <- explain("H2", "E2", explanations)
explanations <- contradict("H1", "H2", explanations)
coherence <- computeIMEC(explanations, Thresholds, Phenomena, T1)
summary(coherence)
plot(coherence)
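The returned object can be inspected like an ordinary matrix; as stated under Value, all edge weights are 0 immediately after initialization. A minimal check (the variable name is arbitrary, and the comparison assumes numeric edge weights):
emptyNetwork <- initializeNetwork(Phenomena, T1)
all(emptyNetwork == 0)  # TRUE: no explanatory relations are set yet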
Plots the explanatory relations
Description
Plots the explanatory relations between data and phenomena. A window will open in which you can drag the nodes into the intended positions. Then press Enter to plot the network.
Usage
## S3 method for class 'IMEC'
plot(x, nodesize = 10, ...)
Arguments
x: Object of the class IMEC as returned by computeIMEC.
nodesize: Size of the vertices in the plotted network.
...: Other parameters passed on to the S3 method.
Examples
# simple example comparing two hypotheses, one of them with more explanatory breadth
T1 <- c("H1", "H2")
Phenomena <- c("E1", "E2")
Thresholds <- c(2,2)
explanations <- initializeNetwork(Phenomena, T1)
explanations <- explain("H1", "E1", explanations)
explanations <- explain("H1", "E2", explanations)
explanations <- explain("H2", "E2", explanations)
explanations <- contradict("H1", "H2", explanations)
coherence <- computeIMEC(explanations, Thresholds, Phenomena, T1)
summary(coherence)
plot(coherence)
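The nodesize argument scales the vertices; the value below is arbitrary.
plot(coherence, nodesize = 15)  # larger vertices for this small network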
Summary of an IMEC object.
Description
Summary of an IMEC object.
Usage
## S3 method for class 'IMEC'
summary(object, ...)
Arguments
object: IMEC object.
...: Other parameters passed on from the S3 method.
Examples
# simple example comparing two hypotheses, one of them with more explanatory breadth
T1 <- c("H1", "H2")
Phenomena <- c("E1", "E2")
Thresholds <- c(2,2)
explanations <- initializeNetwork(Phenomena, T1)
explanations <- explain("H1", "E1", explanations)
explanations <- explain("H1", "E2", explanations)
explanations <- explain("H2", "E2", explanations)
explanations <- contradict("H1", "H2", explanations)
coherence <- computeIMEC(explanations, Thresholds, Phenomena, T1)
summary(coherence)
plot(coherence)