Added AlexNet implementation for Extended MNIST #19
base: master
Conversation
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google. ℹ️ Googlers: Go here for more info.
@Craigacp @karllessard - Please have a look.
@googlebot I fixed it.
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google. ℹ️ Googlers: Go here for more info.
@akshaybahadur21 Great contribution! I need some time to check the architecture, and then it could be merged.
Looks like the commit uses a different email address than the one you've got registered on GitHub, which is why it's complaining about the CLA. Could you fix it before we go any further?
@Craigacp - let me make the necessary changes |
@Craigacp @karllessard I invite you to the discussion about the datasets.
public static final int BATCH_SIZE = 500;

// EMNIST dataset paths
public static final String TRAINING_IMAGES_ARCHIVE = "emnist/emnist-letters-train-images-idx3-ubyte.gz";
We have no strong position on adding new datasets. But I suppose the best approach here is to add a link to the dataset and its creators (for example, the paper https://arxiv.org/abs/1702.05373).
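For instance, something like the following sketch; the Javadoc wording and the labels constant are assumptions for illustration, and only the images constant actually appears in the quoted diff:

/**
 * EMNIST Letters archives, in the original IDX format.
 * Dataset: Cohen, Afshar, Tapson & van Schaik (2017),
 * "EMNIST: an extension of MNIST to handwritten letters",
 * https://arxiv.org/abs/1702.05373
 */
public static final String TRAINING_IMAGES_ARCHIVE = "emnist/emnist-letters-train-images-idx3-ubyte.gz";
// Assumed companion constant; the quoted diff ends before it.
public static final String TRAINING_LABELS_ARCHIVE = "emnist/emnist-letters-train-labels-idx1-ubyte.gz";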
Could we keep training on the MNIST dataset?
// Layer 5
Relu<TFloat32> relu5 = alexNetConv2DLayer("2", tf, relu4, new int[]{3, 3, 384, 256}, 256);
MaxPool<TFloat32> pool5 = alexNetMaxPool(tf, relu5);
LocalResponseNormalization<TFloat32> norm5 = alexNetModelLRN(tf, pool5);
Dear @akshaybahadur21, could you please explain why you are using LRN here? (Please share a link to the AlexNet reference.)
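For reference, the original AlexNet paper (Krizhevsky, Sutskever & Hinton, 2012, "ImageNet Classification with Deep Convolutional Neural Networks") applies LRN only after the first two convolutional layers, with n = 5, k = 2, alpha = 1e-4, beta = 0.75, so applying it after layer 5 departs from the paper. A minimal sketch of those hyperparameters, assuming the tensorflow-java NnOps API (the helper name mirrors alexNetModelLRN from the diff):

import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.op.nn.LocalResponseNormalization;
import org.tensorflow.types.TFloat32;

// Sketch: LRN with the AlexNet-paper hyperparameters
// (depth radius n = 5, bias k = 2, alpha = 1e-4, beta = 0.75).
static LocalResponseNormalization<TFloat32> alexNetModelLRN(Ops tf, Operand<TFloat32> input) {
  return tf.nn.localResponseNormalization(input,
      LocalResponseNormalization.depthRadius(5L),
      LocalResponseNormalization.bias(2f),
      LocalResponseNormalization.alpha(1e-4f),
      LocalResponseNormalization.beta(0.75f));
}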
SoftmaxCrossEntropyWithLogits<TFloat32> batchLoss = tf.nn.raw
    .softmaxCrossEntropyWithLogits(logits, oneHot);
Mean<TFloat32> labelLoss = tf.math.mean(batchLoss.loss(), tf.constant(0));
Add<TFloat32> regularizers = tf.math.add(tf.nn.l2Loss(fc1Weights), tf.math
Let's think about whether we need regularization here; maybe it is better to remove it and refactor the dense layers into separate procedures (a rough sketch of such a helper follows below).
Related to issue #11
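If the dense layers are pulled out, one possible shape for that refactor is sketched here; the helper name, signature, and initialization scheme are all invented for illustration and are not taken from the PR. It assumes the imports from the LRN sketch above, plus org.tensorflow.op.core.Variable:

// Sketch of the suggested refactor: one reusable dense-layer procedure.
static Operand<TFloat32> denseLayer(Ops tf, Operand<TFloat32> input,
                                    int inputSize, int outputSize, boolean withRelu) {
  // Small truncated-normal weight initialization (illustrative choice).
  Variable<TFloat32> weights = tf.variable(tf.math.mul(
      tf.random.truncatedNormal(tf.array(inputSize, outputSize), TFloat32.class),
      tf.constant(0.1f)));
  Variable<TFloat32> biases = tf.variable(tf.fill(tf.array(outputSize), tf.constant(0.1f)));
  Operand<TFloat32> out = tf.math.add(tf.linalg.matMul(input, weights), biases);
  return withRelu ? tf.nn.relu(out) : out;
}

Keeping the layer construction in one procedure would also make it easy to drop the L2 terms, since the weights would no longer need to be visible at the loss-construction site.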