A decision forest is fit by creating an ensemble of decision trees. This approach capitalizes on the diversity of the trees within the ensemble to improve accuracy and reduce overfitting. Each tree is trained on a bootstrap sample of the training data, i.e., a different subset of the observations drawn by random sampling with replacement. Additionally, at each internal node of every tree, the best feature to split on is chosen by considering a random subset of the features rather than all available features.
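The two sources of randomness described above can be sketched with NumPy. This is an illustrative fragment, not a full forest implementation: the array `X` and the sqrt-of-features subset size are assumptions chosen for the example (sqrt is a common default for classification).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))  # hypothetical data: 100 observations, 8 features
n_obs, n_features = X.shape

# Bootstrapping: draw n_obs observation indices with replacement,
# so some observations repeat and others are left out.
boot_idx = rng.integers(0, n_obs, size=n_obs)
X_boot = X[boot_idx]

# Feature subsampling: at a split, consider only a random subset
# of features (sqrt(n_features) is a common default).
k = int(np.sqrt(n_features))
feature_subset = rng.choice(n_features, size=k, replace=False)
```

In a full implementation, a new `feature_subset` would be drawn at every internal node while the tree is grown, and the split search at that node would be restricted to those features.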
The final prediction of a random forest is obtained by majority voting across all the trees in the ensemble (for classification; for regression, the trees' outputs are averaged). Aggregating the predictions of many diverse trees yields a more stable and accurate model than any single decision tree.