I am very much a beginner to neural networks in Python, and I have been really enjoying it so far.
I have tried to build a simple neural network, but my test predictions seem to be restricted to a narrow range, and I was wondering if anything screams out to any expert out there.
This is the general shape of my response overall, after building a simple neural network:
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [
        layers.Dense(43, name="layer1", input_shape=[43]),
        layers.Dense(16, activation="relu", name="layer2"),
        layers.Dense(8, activation="relu", name="layer3"),
        layers.Dense(1, activation="linear", name="layer4"),
    ]
)
model.compile(loss="mae")
class PrintDot(keras.callbacks.Callback):
    # Print a dot per epoch, with a line break every 100 epochs.
    def on_epoch_end(self, epoch, logs=None):
        if epoch % 100 == 0:
            print('')
        print('.', end='')

history = model.fit(
    normed_train_data, train_labels,
    epochs=100, validation_split=0.2,
    verbose=0, callbacks=[PrintDot()])
The distribution of my test predictions looks like this:
Now, to me this feels like the network is lacking something quite trivial, maybe in the combination of activation and optimizer, but I am not sure what it is screaming.
My loss and validation loss (on an MAE basis) actually look OK, i.e. there is no overfitting and the validation loss sits slightly above the training loss; however, I would like it to converge around 15-20 rather than 33-34.
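Since the activation/optimizer combination is one suspect: model.compile(loss="mae") falls back to Keras's default optimizer, so making the optimizer and its learning rate explicit is an easy first experiment. A minimal sketch, reusing the architecture above (the Adam optimizer and the 1e-3 learning rate are just illustrative starting points, not recommendations):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Same architecture as above, but compiled with an explicit optimizer
# and learning rate so both can be tuned deliberately.
model = keras.Sequential(
    [
        layers.Dense(43, name="layer1", input_shape=[43]),
        layers.Dense(16, activation="relu", name="layer2"),
        layers.Dense(8, activation="relu", name="layer3"),
        layers.Dense(1, activation="linear", name="layer4"),
    ]
)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),  # explicit, tunable
    loss="mae",
)
```

With this in place, sweeping the learning rate (e.g. 1e-2 down to 1e-4) shows quickly whether the plateau at 33-34 is an optimization problem or a data/architecture one.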
My data has 43 inputs, and I have standardised the data rather than normalising it.
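For reference, standardising (z-scoring) each column with the training set's own statistics looks roughly like this; the function name and variables are made up for illustration:

```python
import numpy as np

def standardise(train, test):
    """Z-score each column using statistics computed on the training set only."""
    mean = train.mean(axis=0)
    std = train.std(axis=0)
    std[std == 0] = 1.0  # guard against constant columns
    return (train - mean) / std, (test - mean) / std
```

The key detail is that the test set is scaled with the training set's mean and standard deviation, so no test statistics leak into preprocessing.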
If anything obvious jumps out to anyone, please let me know.
Thank you!
question from:
https://stackoverflow.com/questions/65921877/tensorflow-neural-network-regression-problem