First off, I am new to machine learning, so these questions may be trivial.
Basically I am trying to tune an object that has numeric knobs and numeric outputs. Brute-force tuning (trying every permutation of knob values) will eventually find knob settings that hit the ideal output values, but it takes time. I'm taking a stab at using ML to at least shorten the tuning process.
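To make "brute force" concrete, this is roughly my current process as a sketch; measure() is a stand-in for physically setting the knobs and reading X, Y, Z, and the grid and tolerance are made-up placeholders:

```python
import itertools

TARGET = {"X": 10.0, "Y": 5.0, "Z": 4.0}
TOLERANCE = 0.2  # hypothetical definition of "near"

def is_good(outputs):
    return all(abs(outputs[k] - TARGET[k]) <= TOLERANCE for k in TARGET)

def brute_force_tune(measure, knob_grid):
    # Try every permutation of knob values until the outputs pass.
    for a, b, c in itertools.product(knob_grid, repeat=3):
        outputs = measure(a, b, c)  # the slow step: one real try per call
        if is_good(outputs):
            return (a, b, c), outputs
    return None, None
```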
I have a dataset covering a large number of good units that were tuned successfully, though each one took a varying number of attempts.
For example, the passing requirement is: X near 10, Y near 5, Z near 4.
Object 1:
Try 1 => A = 1, B = 2, C = 3 ; X = 1, Y = 0, Z = 1 => not good
Try 2 => A = 1, B = 1, C = 1 ; X = 10, Y = 5, Z = 4 => good enough
Object 2:
Try 1 => A = 1.4, B = 2.6, C = 3.8 ; X = 10, Y = 5, Z = 3.9 => lucky!!!
Object 3:
...
Try 10 => A = 1.4, B = 2.6, C = 3.8 ; X = 10, Y = 5, Z = 3.9 => took a while!!!
I'm wondering how to prepare the data for training and testing for this type of problem, since each object went through a variable number of tries before it was successfully tuned. Should I take only the last (successful) combination for each object and keep the same columns (A, B, C, X, Y, Z)? Or take all the tries (multiple rows per object)?
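To show what I mean, here are those two layouts as I picture them in pandas, using the toy numbers above (column names are just my own placeholders):

```python
import pandas as pd

# Option 1: one row per object, keeping only the final (successful) try.
last_try = pd.DataFrame([
    {"object": 1, "A": 1.0, "B": 1.0, "C": 1.0, "X": 10.0, "Y": 5.0, "Z": 4.0},
    {"object": 2, "A": 1.4, "B": 2.6, "C": 3.8, "X": 10.0, "Y": 5.0, "Z": 3.9},
])

# Option 2: every try as its own row, with a "try" column so each
# object's tuning history is preserved.
all_tries = pd.DataFrame([
    {"object": 1, "try": 1, "A": 1.0, "B": 2.0, "C": 3.0,
     "X": 1.0, "Y": 0.0, "Z": 1.0, "good": False},
    {"object": 1, "try": 2, "A": 1.0, "B": 1.0, "C": 1.0,
     "X": 10.0, "Y": 5.0, "Z": 4.0, "good": True},
    {"object": 2, "try": 1, "A": 1.4, "B": 2.6, "C": 3.8,
     "X": 10.0, "Y": 5.0, "Z": 3.9, "good": True},
])
```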
Or, for each object, should I append another set of columns per try, so that each object has only one (wide) row? For example: (A1, B1, C1, X1, Y1, Z1, A2, B2, C2, X2, Y2, Z2, ... An, Bn, Cn, Xn, Yn, Zn).
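With the same toy numbers, that wide layout would look something like this; object 2 only needed one try, so its try-2 columns would be empty (NaN):

```python
import pandas as pd

# One wide row per object: the per-try columns repeat. Objects with
# fewer tries than the maximum get NaN in the unused columns.
wide = pd.DataFrame([
    {"object": 1,
     "A1": 1.0, "B1": 2.0, "C1": 3.0, "X1": 1.0, "Y1": 0.0, "Z1": 1.0,
     "A2": 1.0, "B2": 1.0, "C2": 1.0, "X2": 10.0, "Y2": 5.0, "Z2": 4.0},
    {"object": 2,
     "A1": 1.4, "B1": 2.6, "C1": 3.8, "X1": 10.0, "Y1": 5.0, "Z1": 3.9},
    # object 2's A2..Z2 columns become NaN
])
```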
As for the algorithm choice, all I can articulate is that this isn't a classification problem (the outcome isn't binary or categorical). Something like regression, a decision tree, or hill climbing, if there is such a thing?
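For what it's worth, this is the kind of thing I was imagining, sketched with scikit-learn: a multi-output regression trained on the successful tries that maps desired outputs back to knob settings. I don't know if that framing is even right:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Successful tries only: the outputs (X, Y, Z) and the knob settings
# (A, B, C) that produced them. The real dataset would have many more rows.
outputs = np.array([[10.0, 5.0, 4.0],
                    [10.0, 5.0, 3.9]])
knobs = np.array([[1.0, 1.0, 1.0],
                  [1.4, 2.6, 3.8]])

# Learn outputs -> knobs, then ask the model which knob settings
# should hit the passing requirement, as a first try.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(outputs, knobs)
first_guess = model.predict([[10.0, 5.0, 4.0]])
print(first_guess)  # suggested (A, B, C) to start tuning from
```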
