Questions about Machine Learning in AppSheet Predictions

Hello!

May I ask some questions regarding Machine Learning in AppSheet Predictions:

1) For Classification (Target col = Yes/No), I have the following result:

[Screenshot: segcep_0-1656409182826.png]

a) Is the 1st statement (i.e. "Your model correctly identified 99% of the rows where Churn is true.") actually the "Recall" value in Machine Learning? Or is it actually "Accuracy"?

b) How about the 2nd statement (i.e. "When your model predicts Churn to be true, it is correct 98% of the time"): is this "Precision" in Machine Learning?
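For reference, this is how the two statements map onto recall and precision when computed from a confusion matrix. A minimal sketch with hypothetical true/false-positive counts (not the actual numbers behind the AppSheet report):

```python
# Recall vs. precision from confusion-matrix counts.
# tp / fn / fp are hypothetical, chosen only to illustrate the formulas.
tp = 990   # Churn is true, model predicted true
fn = 10    # Churn is true, model predicted false
fp = 20    # Churn is false, model predicted true

# "Correctly identified X% of the rows where Churn is true" = recall
recall = tp / (tp + fn)

# "When the model predicts true, it is correct X% of the time" = precision
precision = tp / (tp + fp)

print(f"recall = {recall:.2%}, precision = {precision:.2%}")
```

Note that neither statement is accuracy, which would also count the rows where Churn is false.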

2) For Classification (Target col = Yes/No), in the case of imbalanced data in the Target column (e.g. 80% Yes and 20% No), will the predictive model in AppSheet apply some balancing method to tackle this issue?

The result shown in Question 1 was run on about 5,600 rows of data, with 83% Yes and 17% No in the Target column. The result looks too good, which makes me wonder whether that is really the case, or whether it is caused by the imbalanced data.
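This suspicion is reasonable: with an 83/17 split, even a trivial model that always predicts "Yes" scores 83% accuracy while never catching a single "No". A quick sketch of that baseline (illustrative data only):

```python
# Why headline metrics can look "too good" on imbalanced data:
# a constant-majority model on an 83/17 Yes/No split.
labels = ["Yes"] * 83 + ["No"] * 17

# The trivial baseline: always predict the majority class.
predictions = ["Yes"] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
no_recall = sum(p == y == "No" for p, y in zip(predictions, labels)) / 17

print(accuracy)   # 0.83 despite learning nothing
print(no_recall)  # 0.0 — every "No" row is missed
```

This is why per-class recall and precision (as in Question 1) are more informative than a single accuracy number on imbalanced data.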

3) If AppSheet models do not handle imbalanced data, any suggestions on how to tackle the imbalance manually before inputting the data into AppSheet, if we really don't have sufficient data for the minority class?
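One common manual option (done outside AppSheet, before upload) is random oversampling of the minority class. A minimal pandas sketch with a hypothetical table; the column names are placeholders, not AppSheet fields:

```python
import pandas as pd

# Hypothetical imbalanced table: 80 Yes vs. 20 No in the target column.
df = pd.DataFrame({
    "Target": ["Yes"] * 80 + ["No"] * 20,
    "Feature": range(100),
})

majority = df[df["Target"] == "Yes"]
minority = df[df["Target"] == "No"]

# Resample the minority class with replacement up to the majority size.
minority_up = minority.sample(n=len(majority), replace=True, random_state=42)

# Combine and shuffle so the duplicated rows are interleaved.
balanced = pd.concat([majority, minority_up]).sample(frac=1, random_state=42)

print(balanced["Target"].value_counts())
```

Oversampling duplicates rows rather than creating new information, so it can encourage overfitting to the few minority examples; undersampling the majority class or synthetic methods such as SMOTE are alternatives with their own trade-offs.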

4) For Regression result as follows:

[Screenshot: segcep_1-1656409182830.png]

Can I check whether 776.67 is actually the Mean Absolute Error (MAE)?
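For context, MAE is simply the average of the absolute differences between actual and predicted values, in the same units as the target column. A tiny sketch with made-up numbers:

```python
# Mean Absolute Error: average of |actual - predicted| over all rows.
# The values below are invented purely to show the calculation.
actual    = [1000.0, 2500.0, 1800.0]
predicted = [1200.0, 2100.0, 1500.0]

mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
print(mae)  # (200 + 400 + 300) / 3 = 300.0
```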

5) Do the models in AppSheet remove or otherwise handle outliers during training?
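If outliers end up needing manual treatment before upload, one common approach is an IQR filter: drop rows whose value falls outside 1.5 interquartile ranges of the middle 50%. A sketch on hypothetical data (this is a generic technique, not a description of AppSheet's internal behavior):

```python
import statistics

# Hypothetical numeric column; 95 is an obvious outlier.
values = [10, 12, 11, 13, 12, 11, 95]

q1, q2, q3 = statistics.quantiles(values, n=4)  # quartiles
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Keep only the rows inside the [low, high] fence.
filtered = [v for v in values if low <= v <= high]
print(filtered)
```

Whether removal is appropriate depends on the data: genuine-but-rare values carry signal, whereas data-entry errors are safe to drop.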

Thank you so much in advance! 😊

 
