| Optimizer | Tuning advice |
| --- | --- |
| SGD | Tunable: lr (learning rate), momentum, decay (learning-rate decay), nesterov (True or False) |
| RMSprop | Tunable: lr (learning rate); leave the other arguments at their defaults |
| Adagrad, Adadelta, Nadam | Do not tune any parameters (all defaults recommended) |
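As a minimal sketch of the advice in the table above, assuming the Keras 1.x-era optimizer API (where the learning rate is the `lr` argument), the tunable optimizers can be configured like this:

```python
from keras.optimizers import SGD, RMSprop

# SGD: all four knobs from the table are exposed as constructor arguments
sgd = SGD(lr=0.01, momentum=0.9, decay=1e-6, nesterov=True)

# RMSprop: tune lr only, keep everything else at its default
rmsprop = RMSprop(lr=0.001)
```

Either object is then passed to `model.compile(optimizer=...)`; for Adagrad, Adadelta, and Nadam the string names (`optimizer='adagrad'`, etc.) suffice, since nothing needs tuning.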
| Parameter | Description |
| --- | --- |
| units | number of nodes in the layer |
| input_dim | dimensionality of the input data (number of feature columns, excluding the label column) |
| activation | activation function for the layer |
| bias | whether to include a bias vector (boolean) |
| output_dim | dimensionality of the output |
```python
keras.layers.core.Dense(output_dim, init='glorot_uniform', activation='linear', weights=None, W_regularizer=None, b_regularizer=None, activity_regularizer=None, W_constraint=None, b_constraint=None, bias=True, input_dim=None)
```
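A small sketch of constructing a layer with this Keras 1.x signature; the sizes (64 in, 128 out) and the l2 strength are arbitrary illustration values, not values from this document:

```python
from keras.layers.core import Dense
from keras.regularizers import l2

# Keras 1.x-style Dense: output_dim comes first, input_dim is only
# needed on the first layer of a model
layer = Dense(output_dim=64,
              init='glorot_uniform',
              activation='relu',
              W_regularizer=l2(0.01),  # penalize large weights
              bias=True,
              input_dim=128)
```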
| Keras Freeze Layers |
| Keras Dropout Layer |
| Keras Flatten Layer |
| Keras Masking Layer |
| Keras Permute Layer |
| Keras Repeat Layer |
| Keras Reshape Layer (output layer) |
| Keras Embedding Layer |
| Keras LSTM Layer |
| Keras GRU Layer |
| Keras Simple RNN Layer |
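As a hedged sketch of how a few of the layers listed above fit together (the vocabulary size, embedding width, and sequence length are placeholder values), a small sequence model might look like this:

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Dropout

model = Sequential()
model.add(Embedding(input_dim=10000, output_dim=64, input_length=100))  # Embedding Layer
model.add(LSTM(32))                                                     # LSTM Layer
model.add(Dropout(0.5))                                                 # Dropout Layer
model.add(Dense(1, activation='sigmoid'))

# "Freeze Layers" corresponds to marking a layer untrainable
# before compiling, e.g. to keep pretrained embeddings fixed:
model.layers[0].trainable = False
```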
Example code below: one input layer, three hidden layers, and one output layer.
```python
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
# First hidden layer; input_dim=1024 declares the (implicit) input layer
model.add(Dense(512, input_dim=1024, activation='relu'))
model.add(Dropout(0.5))
# Later layers infer their input size from the previous layer,
# so input_dim must not be repeated here
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
# Output layer: one sigmoid unit for binary classification
model.add(Dense(1, activation='sigmoid'))
```
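To train this network, it is compiled with one of the optimizers from the table at the top and fit on data. A minimal sketch, where `X_train` and `y_train` are hypothetical placeholders (shapes `(n_samples, 1024)` and `(n_samples,)`):

```python
from keras.optimizers import SGD

model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=0.01, momentum=0.9, nesterov=True),
              metrics=['accuracy'])
# X_train / y_train are placeholder arrays, not defined in this document
model.fit(X_train, y_train, batch_size=32, nb_epoch=10)  # Keras 1.x uses nb_epoch; 2.x renames it to epochs
```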