
Commit e89f0ff

update llama2, rwkv, MoE courses (#52)

1 parent: bfb3354

File tree

12 files changed: +22 lines, -0 lines


.gitmodules (+3 lines)

@@ -0,0 +1,3 @@
+[submodule "Season2.step_into_llm/08.MoE/mixtral-mindspore"]
+	path = Season2.step_into_llm/08.MoE/mixtral-mindspore
+	url = https://github.com/lvyufeng/mistral-mindspore
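
This hunk registers mixtral-mindspore as a git submodule, so a plain clone of the course repo leaves that directory empty until the submodule is fetched. A minimal sketch (not part of the commit) of pulling it in after checkout, wrapping git's standard submodule command in Python:

# Not part of the commit: fetch the newly registered submodule after
# cloning or pulling. Equivalent to running
#   git submodule update --init Season2.step_into_llm/08.MoE/mixtral-mindspore
# from the repository root.
import subprocess

subprocess.run(
    ["git", "submodule", "update", "--init",
     "Season2.step_into_llm/08.MoE/mixtral-mindspore"],
    check=True,
)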
Binary file (5.23 MB) not shown.
(+5 lines)

@@ -0,0 +1,5 @@
+from mindnlp.transformers import AutoModelForCausalLM, AutoTokenizer
+tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-bee-2b")
+model = AutoModelForCausalLM.from_pretrained("openbmb/cpm-bee-2b")
+result = model.generate({"input": "今天天气不错,", "<ans>": ""}, tokenizer)
+print(result)
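
This snippet queries CPM-Bee through mindnlp. CPM-Bee takes a structured dict prompt rather than a plain string: the model fills in the "<ans>" slot given the "input" text (here "今天天气不错," is Chinese for "The weather is nice today,"). The hedged variation below reuses the loaded model and tokenizer and assumes the same generate signature as in the diff; the prompt string is illustrative only:

# Assumes `model` and `tokenizer` from the snippet above are loaded.
# CPM-Bee completes the "<ans>" slot of the dict prompt.
result = model.generate(
    {"input": "北京是中国的", "<ans>": ""},  # "Beijing is China's ..."
    tokenizer,
)
print(result)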
Binary file (3.65 MB) not shown.
(+13 lines)

@@ -0,0 +1,13 @@
+import mindspore
+from mindnlp.transformers import RwkvForCausalLM, AutoTokenizer
+
+expected_output = "Hello my name is Jasmine and I am a newbie to the"
+
+model_id = "RWKV/rwkv-4-169m-pile"
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+input_ids = tokenizer("Hello my name is", return_tensors="ms").input_ids
+model = RwkvForCausalLM.from_pretrained(model_id, ms_dtype=mindspore.float16)
+
+output = model.generate(input_ids, max_new_tokens=10)
+output_sentence = tokenizer.decode(output[0].tolist())
+print(output_sentence, expected_output)
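
This script loads the 169M-parameter RWKV-4 checkpoint trained on the Pile in float16, generates 10 new tokens from the prompt "Hello my name is", and prints the decoded output next to the expected continuation for visual comparison. A small sketch (not part of the commit) that turns that visual check into an assertion, assuming generation is deterministic for this checkpoint:

# Not part of the commit: fail loudly instead of relying on eyeballing.
assert output_sentence == expected_output, (
    f"generation drifted: got {output_sentence!r}, "
    f"expected {expected_output!r}"
)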

Season2.step_into_llm/08.MoE/MoE.pdf

Binary file (4.6 MB) not shown.

Season2.step_into_llm/09.Prompt Engineering/__init__.py: whitespace-only changes.

Season2.step_into_llm/10.Quantization/__init__.py: whitespace-only changes.

0 commit comments