Design proposals/ignored variables #138 (Open)

tdiethe wants to merge 4 commits into `amzn:develop` from `tdiethe:design_proposals/ignored_variables`.
# Adding the ability to ignore certain variables during inference

Tom Diethe (2018-11-26)

## Motivation

In the current design, variables are either latent or observed. By default, when creating a posterior, any variable that is not observed is included in it, which in certain situations has undesired effects.
Consider the following model, depicted as a directed factor graph:

![multi-head bnn](multi_head_bnn.png)

Here we have a multi-head (two heads in this case) Bayesian neural network model. For simplicity it is drawn as having one-dimensional inputs and outputs, and hence all variables in the graph are also one-dimensional.
Note that the model assumes there are different "prediction tasks", but that the input variable is shared between the tasks. To be concrete, in any particular episode we only observe data for one of these tasks: for the first task this will be a tuple of the form `(x, y0)`, where these are arrays of `N` data points, and for the second task it will be `(x, y1)`. When performing inference over the latent variables in either of these scenarios, we will have no observations for the variables in the complement; i.e. in this setting, for the first task we will wish to ignore the variables `r1, y1` completely. Note that they have no impact on the joint distribution.

Including the variable `N` for the number of data points, there are `14 + 8 * h` variables in the model (where `h` is the number of heads), and we wish to have `12 + 6 * h` variables in the posterior, since we need variables for each of the weights and biases along with their respective `µ` and `σ` parameters. For the two-head model above, this gives `14 + 8 * 2 = 30` model variables and `12 + 6 * 2 = 24` posterior variables.
(Note that the variables for the means and standard deviations of the NN weights are present to allow online/transfer learning, by using the mean-field posteriors as priors; they will not be optimized during inference. This is currently achieved by setting `_grad_req = 'null'` for these parameters.)

The current design of the interface for creating a posterior distribution does not allow for variables to be ignored in this manner. For example, if we use the Gaussian mean-field posterior, the logic is as follows:
```python
def create_Gaussian_meanfield(model, observed, dtype=None):
    dtype = get_default_dtype() if dtype is None else dtype
    observed = variables_to_UUID(observed)
    q = Posterior(model)
    # Every unobserved random variable gets a Normal posterior factor.
    for v in model.variables.values():
        if v.type == VariableType.RANDVAR and v not in observed:
            mean = Variable(shape=v.shape)
            variance = Variable(shape=v.shape,
                                transformation=PositiveTransformation())
            q[v].set_prior(Normal(mean=mean, variance=variance, dtype=dtype))
    return q
```
For the model depicted above, we have the following pattern of observations:

```python
observed = [model.x, model.y0]
```
and we end up with an additional posterior variable for the head not being used (e.g. for `y1` when observing `y0`). This then causes issues in the subsequent inference.
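To make the issue concrete, here is a minimal sketch using the names from the examples above (the comments describe assumed behaviour of the helper, not exact library internals):

```python
# Build the mean-field posterior for the two-head model when only the
# first head is observed.
q = create_Gaussian_meanfield(model, observed=[model.x, model.y0])

# Because y1 is an unobserved random variable, the loop above has also
# created mean/variance Variables for it (and for r1), even though no
# data for that head will be supplied in this episode.
```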
## Proposed Changes

One way to solve this is to pass in a list of variables that are to be ignored when creating the posterior, and another such list when running inference.
As an example, for the Gaussian mean-field posterior we could do the following:
```python
def create_Gaussian_meanfield(model, observed, ignored=None, dtype=None):
    dtype = get_default_dtype() if dtype is None else dtype
    observed = variables_to_UUID(observed)
    ignored = variables_to_UUID(ignored) if ignored is not None else []
    q = Posterior(model)
    for v in model.variables.values():
        # Skip observed variables as before, and now also any explicitly
        # ignored ones.
        if v.type == VariableType.RANDVAR and v not in observed and v not in ignored:
            mean = Variable(shape=v.shape)
            variance = Variable(shape=v.shape,
                                transformation=PositiveTransformation())
            q[v].set_prior(Normal(mean=mean, variance=variance, dtype=dtype))
    return q
```
We would then specify:

```python
observed = [model.x, model.y0]
ignored = [model.y1]
```
Similarly, for the inference, we would augment the keyword arguments to include these variables, i.e.:

```python
kwargs = dict(x=x, y0=y, ignored=[model.y1, model.r1])
inference.run(max_iter=max_iter, learning_rate=learning_rate,
              verbose=False, callback=print_status, **kwargs)
```
Note here that we have additionally specified that the parent of `y1`, namely `r1`, should also be ignored. In terms of the machine learning algorithm, this would then be estimating the posterior:

```
p(Θ, r0, r1 | x, y0, y1) = p(Θ, r0 | x, y0) ∝ p(x, y0, r0 | Θ) p(Θ)
```

where `Θ` denotes all of the weights and biases collected together.
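This reduction holds because, reading the graph structure from the description above (the exact factorization is an assumption), `r1` and `y1` enter the joint only through the factor `p(r1 | x, Θ) p(y1 | r1)`, which integrates to one:

```
∫ ∫ p(r1 | x, Θ) p(y1 | r1) dy1 dr1 = 1
```

so ignoring `r1` and `y1` leaves the posterior over `(Θ, r0)` unchanged.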
## Rejected Alternatives

A possible solution is to have a flag set on the variable itself (defaulting to `False`) saying that the variable should be ignored; a sketch is given after this list. Drawbacks of this approach:

- we would need to keep track of the flag in multiple places;
- if we want to "unset" the flag, the inference algorithms would also need to be told about the un-setting (e.g. when warm-starting inference).
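For concreteness, a minimal sketch of what this rejected alternative might look like (the `ignored` attribute is hypothetical and not part of the current API):

```python
# Hypothetical per-variable flag (not part of the current API).
v = Variable(shape=(1,))
v.ignored = True

# Every consumer of the graph would then need a check like this one, and
# code that later un-sets the flag would have to notify any warm-started
# inference state as well.
active = [v for v in model.variables.values()
          if v.type == VariableType.RANDVAR and not getattr(v, 'ignored', False)]
```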
---

**Review comment:**
Is there a reason you can't pass a "targets" list instead of "ignored"? I think we could then use those targets to compute `log_pdf` / draw samples only for those variables (and their associated dependencies).

Also, it's not really an issue that we define mean-field factors for all variables in the posterior definition, as long as we ignore the unnecessary parts during the computation. I feel less strongly about this part, as it doesn't change the main API, only the `create_Gaussian_meanfield` helper function.
So in the above, it would look something like:
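Presumably something along the lines of the kwargs example above, with `targets` naming the variables whose ancestors form the subgraph of interest (the `targets` keyword is the reviewer's proposal here, not an existing API):

```python
# Proposed (hypothetical) API: name the variables to compute, rather
# than the variables to ignore.
kwargs = dict(x=x, y0=y, targets=[model.y0])
inference.run(max_iter=max_iter, learning_rate=learning_rate,
              verbose=False, callback=print_status, **kwargs)
```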
And the inference algorithm would figure out the required subgraph to compute those targets automatically. (Just traversing up the FactorGraph.)
---

**Reply:**
Do you mean "targets" here in the sense of classification targets, or inference targets? If it's the former, then I think that's pretty specific to the supervised learning setting. If it's the latter, then I think this is actually quite reasonable in many cases: being able to specify which variables are observed and which variables you're interested in, and only performing computations on the subgraph that these represent. Note that this is the model that Infer.NET uses (set observed variables, then declare which variables you want marginal posteriors for). Of course, sometimes you are truly interested in the full posterior (e.g. when performing MCMC), and then this makes less sense.