Memory Requirement Reduction

Updated at 2017-11-13 16:31

Saving RAM during training matters when you work with large models. One big problem in reinforcement learning is that you don't know in advance which features are important, so your feature set can bloat if you simply take everything.

Encode weights and biases with fewer bits. Using 32-bit floating-point numbers is usually overkill. A fixed-point encoding is often enough: one bit for the sign, two bits to the left of the binary point, and the remaining bits to the right of it.
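As a minimal sketch of that idea, here is a 16-bit fixed-point encoding with 1 sign bit, 2 integer bits, and 13 fractional bits (so representable values lie roughly in [-4, 4)). The function names are my own, not from any library:

```python
import numpy as np

def encode_q2_13(weights):
    """Encode float weights as 16-bit fixed point: 1 sign bit,
    2 integer bits, 13 fractional bits."""
    scaled = np.round(np.asarray(weights, dtype=np.float64) * (1 << 13))
    # Clamp to the representable int16 range before casting.
    return np.clip(scaled, -(1 << 15), (1 << 15) - 1).astype(np.int16)

def decode_q2_13(encoded):
    """Recover approximate float weights from the fixed-point encoding."""
    return encoded.astype(np.float32) / (1 << 13)

w = np.array([0.5, -1.25, 3.99999], dtype=np.float32)
q = encode_q2_13(w)
print(q.nbytes, w.nbytes)   # 6 vs 12 bytes: half the memory
print(decode_q2_13(q))
```

The trade-off is precision: values are rounded to the nearest 1/8192, so this is only appropriate once you know your weights stay inside the representable range.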

Use sparse matrices. When you are working with billions of features, don't send billions of zeros back and forth; use an encoding that handles sparsity. Most machine-learning libraries take care of this for you at the framework level, though.
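A small illustration of the memory difference, using SciPy's CSR format (assuming SciPy is available; the numbers here are just for demonstration):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A feature matrix where almost every entry is zero.
dense = np.zeros((1000, 10000), dtype=np.float32)
dense[0, 5] = 1.0
dense[42, 9999] = 3.5

sparse = csr_matrix(dense)

# Dense storage: 1000 * 10000 * 4 bytes = 40 MB.
print(dense.nbytes)
# CSR stores only the nonzero values plus index arrays: a few KB here.
print(sparse.data.nbytes + sparse.indices.nbytes + sparse.indptr.nbytes)
```

In practice you would build the sparse matrix directly from (row, column, value) triples rather than ever materializing the dense array.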

Reinforcement Learning Specific

Use Poisson inclusion for feature selection. When we encounter a feature that is not already in our model, we add it to the model only with probability p. In expectation, a feature is seen about 1/p times before it is included, so rare features tend to stay out.
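The idea above can be sketched in a few lines. The class and parameter names here are hypothetical, not from any particular library:

```python
import random

class PoissonInclusionModel:
    """Sketch of probabilistic feature inclusion: an unseen feature
    enters the model with probability p; once included, it is always
    updated. (Hypothetical class, for illustration only.)"""

    def __init__(self, p, lr=0.1, seed=0):
        self.p = p
        self.lr = lr
        self.weights = {}
        self.rng = random.Random(seed)

    def update(self, feature, gradient):
        if feature not in self.weights:
            # Roll the dice: admit a new feature only with probability p.
            if self.rng.random() >= self.p:
                return
            self.weights[feature] = 0.0
        self.weights[feature] -= self.lr * gradient
```

Memory is saved because no per-feature state is allocated for features that never pass the coin flip.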

Use Bloom filter inclusion for feature selection. Use a counting Bloom filter to detect the first n times a feature is encountered in training, and add the feature to the model only once it has occurred more than n times. Because Bloom filters can return false positives, a feature may occasionally be admitted early, but it is never admitted late.
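A minimal counting Bloom filter sketch, assuming a hash built from Python's standard `hashlib` (the class and sizes are illustrative, not from a library):

```python
import hashlib

class CountingBloomFilter:
    """Approximate per-key counter in fixed memory. Hash collisions can
    over-count, so a feature may be admitted early; never late."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.counts = [0] * size

    def _indexes(self, key):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        """Record one occurrence; return the approximate count so far."""
        idxs = list(self._indexes(key))
        for idx in idxs:
            self.counts[idx] += 1
        # The minimum over the key's counters bounds its true count.
        return min(self.counts[idx] for idx in idxs)

model_weights = {}
bloom = CountingBloomFilter()
n = 3  # admit a feature only after it has been seen more than n times

for feature in ["f1", "f1", "f1", "f2", "f1"]:
    if feature not in model_weights and bloom.add(feature) > n:
        model_weights[feature] = 0.0  # feature admitted; allocate weight
```

The filter itself uses constant memory regardless of how many distinct features stream past, which is the point: you never allocate per-feature counters for features you end up discarding.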