Some doubts about DSConv #1
Or maybe it is just a quantization method? Judging by the code alone, it reads to me as a good quantization method.
In the released code I just used […]. If you change that line to […] … If you want to do the multiplications in int and then in FP32, you can do the conv using `input` and `self.intweight` and then multiply the result by `self.alpha`. When I have time I will try to add a demo of that.
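To make that concrete, here is a minimal sketch of the two equivalent forward paths. It is not the repo's actual code: `intweight` and `alpha` below are stand-ins for the module attributes mentioned above, and it assumes one `alpha` scale per output channel (in DSConv the scales are block-wise, so a real implementation would apply the same idea per block):

```python
import torch
import torch.nn.functional as F

out_c, in_c, k = 8, 3, 3
x = torch.randn(1, in_c, 32, 32)
# Hypothetical stand-ins for self.intweight (quantized values) and self.alpha
intweight = torch.randint(-8, 8, (out_c, in_c, k, k)).float()
alpha = torch.rand(out_c)

# Path A: dequantize the weight first, then convolve entirely in FP32
w_fp32 = intweight * alpha.view(-1, 1, 1, 1)
y_a = F.conv2d(x, w_fp32, padding=1)

# Path B: convolve with the integer-valued weights, then rescale in FP32
y_b = F.conv2d(x, intweight, padding=1) * alpha.view(1, -1, 1, 1)

print(torch.allclose(y_a, y_b, atol=1e-4))  # True: the two paths agree
```

Path B is the one described above: the multiply-accumulate work happens on the integer-valued weights, and the single FP32 rescale by `alpha` is applied afterwards.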
I also have several doubts:
Hi Brisker, thanks for your interest in the method.
@MarceloGennari Besides, if you compare with other methods, I am not sure whether it is a fair comparison, given that it basically quantizes only the conv layers.
Hi @brisker, after your comment asking me to compare the method with the papers you pointed out, I am taking some time to review the training procedure for both activations and weights. Thanks for your input here! I am now updating everything (hopefully the paper as well, with all the new training and testing) and will update the code here too.
Hi Marcelo!
I read your paper yesterday. In the paper you say that DSConv is a variation of the conv layer and can replace the standard conv, and Fig. 3 suggests that it does. But in the code on GitHub I can't find a proper forward demo; there is only the weight-conversion demo. So far I have no idea how to test it. Will you provide a full test demo so that readers know how to use `intweight`, `alpha`, `KDSb`, `CDSw`, `CDSb`, etc.?
Hoping for your reply.
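For what it's worth, a minimal sketch of the kind of end-to-end test being asked for might look like the following. It is an assumption-laden illustration, not the repo's API: it covers only `intweight` and `alpha`, uses a simple per-output-channel quantizer in place of DSConv's block-wise one, and leaves out the `KDSb`/`CDSw`/`CDSb` shift terms:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 8, 3, padding=1, bias=False)  # stands in for a pretrained FP32 layer

# Quantize the FP32 weight into intweight * alpha (per output channel here;
# DSConv does this per block along the channel dimension)
bits = 4
w = conv.weight.data
alpha = w.abs().amax(dim=(1, 2, 3), keepdim=True) / (2 ** (bits - 1) - 1)
intweight = torch.round(w / alpha).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)

x = torch.randn(1, 3, 32, 32)
y_ref = conv(x)                                  # standard conv output
y_q = F.conv2d(x, intweight * alpha, padding=1)  # quantized replacement

# If the quantizer is any good, the replacement should track the original
print((y_ref - y_q).abs().max())
```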