Regarding differences in raw AICc values generated by MuMIn and the model_performance function from the performance package #893
3 comments · 8 replies
Can you provide a reproducible example?
If I am not wrong, this could also be because of the log-transformed response variable. In fact, model_performance warns about inaccuracy in handling a log-transformed response (even with REML = FALSE). In such a case, wouldn't it be risky to use compare_performance for model comparison?
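As background for why a log-transformed response matters here: a log-likelihood computed on the log(y) scale is not directly comparable with one computed on the raw y scale unless the standard change-of-variables (Jacobian) term is included. The sketch below is a hand-rolled illustration of that textbook adjustment, not code from either package; the function name and the numbers are hypothetical.

```python
import math

def loglik_adjust_for_log_response(log_lik_on_log_scale, y):
    """Put the log-likelihood of a model fitted to log(y) back on the
    scale of y, so it is comparable with models fitted to y directly.

    For the transformation z = log(y), the Jacobian contribution is
    sum(log(dz/dy)) = sum(log(1/y)) = -sum(log(y)).
    """
    return log_lik_on_log_scale - sum(math.log(v) for v in y)

# Hypothetical numbers: a log-scale log-likelihood of -100 with these
# three responses shifts noticeably once the Jacobian term is applied.
y = [2.0, 4.0, 8.0]
print(round(loglik_adjust_for_log_response(-100.0, y), 4))
```

If one package applies this adjustment and another does not, raw AICc values will disagree by a constant that depends on the data, even when both use the same AICc formula.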
Here is a snippet of a working example. First, I set REML = FALSE everywhere before model comparison. Although model_performance asks to set estimator = "ML", that should not be required here, since REML = FALSE already means the estimator is ML. Furthermore, the warning "...this ignores REML = TRUE" looks unnecessary when REML = FALSE. In that case (REML = FALSE), should I consider the log-likelihood computed by model_performance accurate and reliable? If so, would it be fair to say that the results of the
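For readers following the weights mentioned in the thread: Akaike weights are a deterministic transform of AICc differences, and even a moderate spread in AICc drives the best model's weight to essentially 1.0. This is a hand-rolled sketch of the standard formula with made-up AICc values, not output from either package.

```python
import math

def akaike_weights(aicc_values):
    """Akaike weights from a list of AICc values:
    w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
    where delta_i = AICc_i - min(AICc).
    """
    best = min(aicc_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aicc_values]
    total = sum(rel)
    return [r / total for r in rel]

# A delta-AICc of ~20 or more already pushes the top model's weight
# to ~1.0, which can make the comparison output look overconfident.
ws = akaike_weights([2312.7, 2334.9, 2351.2])
print([round(w, 3) for w in ws])  # -> [1.0, 0.0, 0.0]
```

So a weight of exactly 1.0 in compare_performance output does not by itself indicate an error; it only says the other models' AICc values are far behind, on whatever log-likelihood definition was used.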
Dear authors, I observed several important differences when calculating raw AICc values using the MuMIn and performance packages. First, I would like to set the context. While comparing the performance of my models, I received an output with absolute confidence in one model (AICc weight = 1.0), while the others had a value of 0.0. I was therefore interested in the raw AICc values, which I tried to derive with the MuMIn package, and to my surprise, the model with an AICc weight of 1.0 did not have the lowest AICc value in MuMIn; in fact, it was one of the mid-range values (raw AICc = 2312.7498). I was then motivated to check the same with the model_performance function from the performance package, and it came out to be -3814.8, which is very different from the earlier value. Is it because the model_performance function uses a different formula for calculating AICc than MuMIn? Interestingly, the models with the lowest AICc values in MuMIn had identical values in model_performance, which is quite striking.
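For reference while comparing the two packages' numbers: the small-sample AICc correction itself is standard, so discrepancies of this size usually trace back to the log-likelihood being fed in (ML vs. REML, or scale adjustments for a transformed response) rather than the correction formula. A hand-rolled sketch of the standard formula, with hypothetical inputs:

```python
def aicc(log_lik, k, n):
    """Second-order Akaike Information Criterion (AICc).

    log_lik: maximized log-likelihood of the model
    k:       number of estimated parameters (including residual variance)
    n:       number of observations
    """
    aic = -2.0 * log_lik + 2.0 * k
    # Small-sample correction; only defined for n > k + 1.
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

# Hypothetical values: the same formula yields wildly different AICc
# depending on which log-likelihood is supplied, so ML-based and
# REML-based (or Jacobian-adjusted) log-likelihoods must not be mixed.
print(round(aicc(-1150.0, 6, 500), 2))  # -> 2312.17
```

A quick diagnostic along these lines: extract the log-likelihood each package reports for the same model and run it through this formula; if the AICc values then match, the formula is the same and the log-likelihood definition is what differs.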
(attachment: models that I compared)