Added AlexNet implementation for Extended MNIST #19

Open

akshaybahadur21 wants to merge 1 commit into master
Conversation

akshaybahadur21

Related to issue #11


google-cla bot commented Nov 22, 2020

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment "@googlebot I fixed it." If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@google-cla google-cla bot added the cla: no label Nov 22, 2020
@akshaybahadur21
Author

@Craigacp @karllessard - Please have a look.

@akshaybahadur21
Author

@googlebot I fixed it.


google-cla bot commented Nov 22, 2020

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment "@googlebot I fixed it." If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@zaleslaw
Contributor

@akshaybahadur21 Great contribution! I need some time to check the architecture, and then it could be merged.

@Craigacp
Collaborator

Looks like the commit uses a different email address than the one you've got registered in GitHub, which is why it's complaining about the CLA. Could you fix it before we go any further?

@akshaybahadur21
Author

@Craigacp - let me make the necessary changes

Contributor

zaleslaw left a comment


@Craigacp @karllessard I invite you to a discussion about the datasets.

public static final int BATCH_SIZE = 500;

// EMNIST dataset paths
public static final String TRAINING_IMAGES_ARCHIVE = "emnist/emnist-letters-train-images-idx3-ubyte.gz";
Contributor

We have no strong position on adding new datasets, but I suppose the best approach here is to add a link to the dataset and its creators (for example, the paper https://arxiv.org/abs/1702.05373).
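
A minimal sketch of how that reference could be added as a comment above the dataset constants (the labels-archive name is assumed from the standard EMNIST file naming, not taken from this diff):

// EMNIST: an extension of MNIST to handwritten letters.
// Cohen, Afshar, Tapson, and van Schaik (2017), https://arxiv.org/abs/1702.05373
public static final String TRAINING_IMAGES_ARCHIVE = "emnist/emnist-letters-train-images-idx3-ubyte.gz";
// Assumed companion constant, following the usual IDX naming convention:
public static final String TRAINING_LABELS_ARCHIVE = "emnist/emnist-letters-train-labels-idx1-ubyte.gz";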

Contributor

Could we keep training on the MNIST dataset?
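
If training were kept on the classic MNIST dataset instead, the constants might simply point at the original MNIST archives; a sketch, assuming the same directory layout as the EMNIST paths above (the "mnist/" prefix is an assumption):

// Standard MNIST dataset paths (file names from the original MNIST distribution).
public static final String TRAINING_IMAGES_ARCHIVE = "mnist/train-images-idx3-ubyte.gz";
public static final String TRAINING_LABELS_ARCHIVE = "mnist/train-labels-idx1-ubyte.gz";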

//Layer 5
Relu<TFloat32> relu5 = alexNetConv2DLayer("2", tf, relu4, new int[]{3, 3, 384, 256}, 256);
MaxPool<TFloat32> pool5 = alexNetMaxPool(tf, relu5);
LocalResponseNormalization<TFloat32> norm5 = alexNetModelLRN(tf, pool5);
Contributor

Dear @akshaybahadur21, could you please explain why you are using LRN here (and please share a link to the AlexNet reference)?
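
For reference, in the original AlexNet paper (Krizhevsky, Sutskever, and Hinton, 2012, "ImageNet Classification with Deep Convolutional Neural Networks"), local response normalization is applied only after the ReLU of the first and second convolutional layers, before pooling, with k = 2, n = 5, alpha = 1e-4, beta = 0.75. A minimal sketch of an LRN call with those hyperparameters using the raw TF Java op (the relu1 input name is illustrative, and n = 5 is mapped to a depth radius of 2):

// LRN with the hyperparameters reported in the AlexNet paper (assumed mapping: depthRadius = n / 2).
LocalResponseNormalization<TFloat32> norm1 = tf.nn.localResponseNormalization(relu1,
    LocalResponseNormalization.depthRadius(2L),
    LocalResponseNormalization.bias(2f),
    LocalResponseNormalization.alpha(1e-4f),
    LocalResponseNormalization.beta(0.75f));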

SoftmaxCrossEntropyWithLogits<TFloat32> batchLoss = tf.nn.raw
.softmaxCrossEntropyWithLogits(logits, oneHot);
Mean<TFloat32> labelLoss = tf.math.mean(batchLoss.loss(), tf.constant(0));
Add<TFloat32> regularizers = tf.math.add(tf.nn.l2Loss(fc1Weights), tf.math
Contributor

Let's think about whether we need regularization here; maybe it would be better to remove it and refactor the dense layers into separate procedures.
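
A minimal sketch of what pulling a fully connected layer into its own procedure could look like (the method name, shapes, and initialization scale are illustrative assumptions, not taken from this PR; it relies on the Ops, Operand, Variable, and TFloat32 types already used in this file):

// Hypothetical helper: one dense + ReLU layer as a separate procedure.
private static Operand<TFloat32> denseReluLayer(Ops tf, Operand<TFloat32> input, int inputSize, int units) {
  // Truncated-normal weights scaled by 0.1 and constant 0.1 biases (illustrative initialization).
  Variable<TFloat32> weights = tf.variable(tf.math.mul(
      tf.random.truncatedNormal(tf.array(inputSize, units), TFloat32.class), tf.constant(0.1f)));
  Variable<TFloat32> biases = tf.variable(tf.fill(tf.array(units), tf.constant(0.1f)));
  // Affine transform followed by ReLU.
  return tf.nn.relu(tf.math.add(tf.linalg.matMul(input, weights), biases));
}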
