PKG: Add package lnn #1719
Merged. 17 commits:
9a3f46b  Pass: Refactor code into a function (Shaikh-Ubaid)
e768338  Pass: Refactor: Check for nullptr earlier (Shaikh-Ubaid)
deed90e  Pass: Refactor code into function (Shaikh-Ubaid)
7923055  PKG: Basic package ready (Shaikh-Ubaid)
9696125  PKG: Use list inplace of numpy arrays (Shaikh-Ubaid)
c9a5727  PKG: Fix weights initialization (Shaikh-Ubaid)
397124d  PKG: Use floating points (Shaikh-Ubaid)
4f5d2bc  Support importing StructType (Shaikh-Ubaid)
b2d0853  TEST: Add package test (Shaikh-Ubaid)
779ff70  ASR: Fix derived_type points outside symtab (Shaikh-Ubaid)
a507c7f  TEST: Add assert and finalize test_pkg_lnn.py (Shaikh-Ubaid)
5265a01  lpdraw pkg: flip graph along y-axis (Shaikh-Ubaid)
ca9af51  TEST: Plot results in test_pkg_lnn.py (Shaikh-Ubaid)
f8fe296  TEST: Make test case work with both lpython, python (Shaikh-Ubaid)
6b4bf07  TEST: Add another test (Shaikh-Ubaid)
1feb3bb  TEST: Enable added test case (Shaikh-Ubaid)
fb669dd  PASS: Refactor: Rename to (Shaikh-Ubaid)
Files changed:
Empty file.
New file (+1 line):

from .perceptron_main import init_perceptron, train_dataset, test_perceptron, normalize_input_vectors, print_perceptron, Perceptron
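This one-line module re-exports the package's public API, which is what lets the test below write "from lnn.perceptron import ..." instead of reaching into perceptron_main. A minimal usage sketch (hypothetical; assumes the lnn package is on the import path):

    from lnn.perceptron import Perceptron, init_perceptron

    # Fields: no_of_inputs, weights, learn_rate, iterations_limit,
    # des_accuracy, cur_accuracy, epochs_cnt (see perceptron_main below).
    p: Perceptron = Perceptron(0, [0.0], 0.0, 0, 0.0, 0.0, 0)
    init_perceptron(p, 2, 0.05, 10000, 90.0)  # 2 inputs, learn rate 0.05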
New file (+127 lines):
from lpython import dataclass, i32, f64
from sys import exit


@dataclass
class Perceptron:
    no_of_inputs: i32
    weights: list[f64]
    learn_rate: f64
    iterations_limit: i32
    des_accuracy: f64
    cur_accuracy: f64
    epochs_cnt: i32


def normalize(value: f64, leftMin: f64, leftMax: f64, rightMin: f64, rightMax: f64) -> f64:
    # Figure out how 'wide' each range is
    leftSpan: f64 = leftMax - leftMin
    rightSpan: f64 = rightMax - rightMin

    # Convert the left range into a 0-1 range (float)
    valueScaled: f64 = (value - leftMin) / leftSpan

    # Convert the 0-1 range into a value in the right range.
    return rightMin + (valueScaled * rightSpan)
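
# Worked example (comment only, hypothetical numbers):
# normalize(5.0, 0.0, 10.0, -1.0, 1.0) gives valueScaled = 0.5 and
# returns -1.0 + 0.5 * 2.0 = 0.0, the midpoint of the target range.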

def normalize_input_vectors(input_vectors: list[list[f64]]):
    # Rescale every column of the dataset into [-1.0, 1.0], in place.
    rows: i32 = len(input_vectors)
    cols: i32 = len(input_vectors[0])

    j: i32
    for j in range(cols):
        colMinVal: f64 = input_vectors[0][j]
        colMaxVal: f64 = input_vectors[0][j]
        i: i32
        for i in range(rows):
            if input_vectors[i][j] > colMaxVal:
                colMaxVal = input_vectors[i][j]
            if input_vectors[i][j] < colMinVal:
                colMinVal = input_vectors[i][j]

        for i in range(rows):
            input_vectors[i][j] = normalize(input_vectors[i][j], colMinVal, colMaxVal, -1.0, 1.0)


def get_inp_vec_with_bias(a: list[f64]) -> list[f64]:
    # Copy the input vector and append the constant bias input 1.0.
    b: list[f64] = []
    i: i32
    for i in range(len(a)):
        b.append(a[i])
    b.append(1.0)
    return b


def init_weights(size: i32) -> list[f64]:
    weights: list[f64] = []
    i: i32
    for i in range(size):
        weights.append(0.0)
    weights.append(0.0)  # append bias
    return weights


def init_perceptron(p: Perceptron, n: i32, rate: f64, iterations_limit: i32, des_accuracy: f64):
    if (n < 1 or n > 1000):
        print("no_of_inputs must be between [1, 1000]")
        exit(1)
    p.no_of_inputs = n
    p.weights = init_weights(n)
    p.learn_rate = rate
    p.iterations_limit = iterations_limit
    p.des_accuracy = des_accuracy
    p.cur_accuracy = 0.0
    p.epochs_cnt = 0


def train_perceptron(p: Perceptron, input_vector: list[f64], actual_output: i32):
    # Perceptron learning rule: w_i += learn_rate * error * x_i
    predicted_output: i32 = predict_perceptron(p, input_vector)
    error: i32 = actual_output - predicted_output
    i: i32
    for i in range(len(input_vector)):
        p.weights[i] += p.learn_rate * f64(error) * f64(input_vector[i])


def predict_perceptron(p: Perceptron, input_vector: list[f64]) -> i32:
    weighted_sum: f64 = 0.0
    i: i32 = 0
    for i in range(len(input_vector)):
        weighted_sum = weighted_sum + p.weights[i] * f64(input_vector[i])
    return activation_function(weighted_sum)


def activation_function(value: f64) -> i32:
    # Sign threshold: +1 for non-negative sums, -1 otherwise.
    if value >= 0.0:
        return 1
    return -1


def train_epoch(p: Perceptron, input_vectors: list[list[f64]], outputs: list[i32]):
    # One pass over the dataset; update weights only on misclassified samples.
    i: i32
    for i in range(len(input_vectors)):
        input_vector: list[f64] = get_inp_vec_with_bias(input_vectors[i])
        if predict_perceptron(p, input_vector) != outputs[i]:
            train_perceptron(p, input_vector, outputs[i])


def train_dataset(p: Perceptron, input_vectors: list[list[f64]], outputs: list[i32]):
    # Run epochs until the desired accuracy or the iteration limit is reached.
    p.cur_accuracy = 0.0
    p.epochs_cnt = 0
    while p.cur_accuracy < p.des_accuracy and p.epochs_cnt < p.iterations_limit:
        p.epochs_cnt += 1
        train_epoch(p, input_vectors, outputs)
        p.cur_accuracy = test_perceptron(p, input_vectors, outputs)


def test_perceptron(p: Perceptron, input_vectors: list[list[f64]], outputs: list[i32]) -> f64:
    # Return the classification accuracy as a percentage.
    correctly_classified_cnt: i32 = 0
    i: i32
    for i in range(len(input_vectors)):
        input_vector: list[f64] = get_inp_vec_with_bias(input_vectors[i])
        if predict_perceptron(p, input_vector) == outputs[i]:
            correctly_classified_cnt += 1
    return (correctly_classified_cnt / len(input_vectors)) * 100.0


def print_perceptron(p: Perceptron):
    print("weights = [", end="")
    i: i32
    for i in range(p.no_of_inputs):
        print(p.weights[i], end=", ")
    print(p.weights[p.no_of_inputs], end="(bias)]\n")
    print("learn_rate = ", end="")
    print(p.learn_rate)
    print("accuracy = ", end="")
    print(p.cur_accuracy)
    print("epochs_cnt = ", end="")
    print(p.epochs_cnt)
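The update in train_perceptron is the classic perceptron learning rule, w_i <- w_i + learn_rate * (actual - predicted) * x_i. A hand-checked sketch of a single update (hypothetical values; the test in this PR runs under both lpython and plain python, so this should also work as ordinary Python using the functions above):

    p = Perceptron(0, [0.0], 0.0, 0, 0.0, 0.0, 0)
    init_perceptron(p, 2, 0.1, 100, 100.0)    # weights start as [0.0, 0.0, 0.0]
    vec = get_inp_vec_with_bias([1.0, -1.0])  # -> [1.0, -1.0, 1.0]
    # Zero weights give weighted_sum = 0.0, so the prediction is +1.
    # With actual output -1, error = -2, and each weight moves by
    # learn_rate * error * x_i = 0.1 * (-2) * x_i.
    train_perceptron(p, vec, -1)
    print(p.weights)                          # [-0.2, 0.2, -0.2]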
New file (+89 lines):
from lnn.perceptron import init_perceptron, print_perceptron, normalize_input_vectors, Perceptron, train_dataset
from lpdraw import Line, Circle, Display, Clear
from lpython import i32, f64, Const
from numpy import empty, int32


def compute_decision_boundary(p: Perceptron, x: f64) -> f64:
    # The boundary is the zero set of the weighted sum:
    # w0*x + w1*y + bias = 0  =>  y = (-w0/w1)*x + (-bias/w1)
    bias: f64 = p.weights[-1]
    slope: f64 = (-p.weights[0] / p.weights[1])
    intercept: f64 = (-bias / p.weights[1])
    return slope * x + intercept


def plot_graph(p: Perceptron, input_vectors: list[list[f64]], outputs: list[i32]):
    Width: Const[i32] = 500   # x-axis limits [0, 499]
    Height: Const[i32] = 500  # y-axis limits [0, 499]
    Screen: i32[Height, Width] = empty((Height, Width), dtype=int32)
    Clear(Height, Width, Screen)

    x1: f64 = 2.0
    y1: f64 = compute_decision_boundary(p, x1)
    x2: f64 = -2.0
    y2: f64 = compute_decision_boundary(p, x2)

    # center the graph using the following offset
    scale_offset: f64 = Width / 4
    shift_offset: f64 = Width / 2
    x1 *= scale_offset
    y1 *= scale_offset
    x2 *= scale_offset
    y2 *= scale_offset

    # print (x1, y1, x2, y2)
    Line(Height, Width, Screen, i32(x1 + shift_offset), i32(y1 + shift_offset), i32(x2 + shift_offset), i32(y2 + shift_offset))

    i: i32
    point_size: i32 = 5
    for i in range(len(input_vectors)):
        input_vectors[i][0] *= scale_offset
        input_vectors[i][1] *= scale_offset
        input_vectors[i][0] += shift_offset
        input_vectors[i][1] += shift_offset
        if outputs[i] == 1:
            # positive samples are drawn as "+" markers
            x: i32 = i32(input_vectors[i][0])
            y: i32 = i32(input_vectors[i][1])
            Line(Height, Width, Screen, x - point_size, y, x + point_size, y)
            Line(Height, Width, Screen, x, y - point_size, x, y + point_size)
        else:
            # negative samples are drawn as circles
            Circle(Height, Width, Screen, i32(input_vectors[i][0]), i32(input_vectors[i][1]), f64(point_size))

    Display(Height, Width, Screen)


def main0():
    p: Perceptron = Perceptron(0, [0.0], 0.0, 0, 0.0, 0.0, 0)
    init_perceptron(p, 2, 0.05, 10000, 90.0)
    print_perceptron(p)
    print("=================================")

    input_vectors: list[list[f64]] = [[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]]
    outputs: list[i32] = [1, 1, 1, -1]

    normalize_input_vectors(input_vectors)
    train_dataset(p, input_vectors, outputs)
    print_perceptron(p)

    assert p.cur_accuracy > 50.0
    assert p.epochs_cnt > 1

    plot_graph(p, input_vectors, outputs)


def main1():
    p: Perceptron = Perceptron(0, [0.0], 0.0, 0, 0.0, 0.0, 0)
    init_perceptron(p, 2, 0.05, 10000, 90.0)
    print_perceptron(p)
    print("=================================")

    input_vectors: list[list[f64]] = [[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0], [1.5, 1.0]]
    outputs: list[i32] = [1, 1, -1, 1, -1]

    normalize_input_vectors(input_vectors)
    train_dataset(p, input_vectors, outputs)
    print_perceptron(p)

    assert p.cur_accuracy > 50.0
    assert p.epochs_cnt > 1

    plot_graph(p, input_vectors, outputs)


main0()
main1()
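As a quick check of compute_decision_boundary's algebra: the plotted line is the zero set of the weighted sum, w0*x + w1*y + bias = 0, rearranged to y = (-w0/w1)*x - bias/w1. A hypothetical verification with made-up weights (plain Python, reusing the functions above):

    p = Perceptron(2, [0.5, -0.25, 0.1], 0.05, 10000, 90.0, 0.0, 0)
    x = 2.0
    y = compute_decision_boundary(p, x)  # slope = 2.0, intercept = 0.4, so y = 4.4
    # Any point on the returned line should make the weighted sum vanish:
    assert abs(0.5 * x + (-0.25) * y + 0.1) < 1e-12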
Review discussion:
Is this for #1721?
It is a refactoring of code into a function. I named the function check_and_update_args_for_pass_arr_by_data_passed_as_callback based on the operation that I thought the code was performing. I think it improves the code readability; it is not tied to any specific issue.
I just renamed it to update_args_for_pass_arr_by_data_funcs_passed_as_callback().