Add Qwen3 Moe #2260
Conversation
Thanks! Took an initial pass. Let's try to clean up the config and state passing: no passing an index down the layer stack, and no passing data structures that apply to the whole layer stack into individual layers.
self,
num_query_heads,
num_key_value_heads,
layer_index,

This layer index is gross, let's remove it. Handle the args properly in the backbone and pass the correct sliding_window_size to this layer and the decoder layer above it.
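For concreteness, a rough sketch of what that could look like in the backbone's layer-building loop (the class name, argument names, and the exact max_window_layers rule are assumptions; the rule should mirror the upstream Hugging Face implementation):

# Hypothetical: resolve the window size once per layer inside the backbone,
# so each attention/decoder layer receives a plain value and never needs
# its own index.
for i in range(num_layers):
    layer_sliding_window = (
        sliding_window_size
        if use_sliding_window and i >= max_window_layers  # placeholder rule
        else None
    )
    layer = Qwen3MoeTransformerDecoder(
        sliding_window_size=layer_sliding_window,
        # ... other per-layer arguments ...
    )
    self.transformer_layers.append(layer)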
Since it's an MoE, the layer index is not just used for the sliding window but also to decide which layers use experts.
I replaced the passing of layer_index, decoder_sparse_step, and mlp_only_layers with a single boolean switch:
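Something along these lines (a sketch only, similar to the sliding-window sketch above; the loop, class name, and sparse-step rule are assumptions based on the upstream Hugging Face implementation and the is_sparse_mlp flag used later in this review):

# Hypothetical backbone loop: decide per layer whether the MLP is sparse,
# then pass only the boolean down to the decoder layer.
for i in range(num_layers):
    is_sparse_mlp = (
        i not in mlp_only_layers
        and num_experts > 0
        and (i + 1) % decoder_sparse_step == 0
    )
    layer = Qwen3MoeTransformerDecoder(
        is_sparse_mlp=is_sparse_mlp,
        # ... other arguments; no layer_index needed ...
    )
    self.transformer_layers.append(layer)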
model(input_data)
"""

def __init__(

In general, let's make sure we prune this list down just to the config options we need.
sliding_window_size=32768,
output_router_logits=False,
router_aux_loss_coefficient=0.001,
mlp_only_layers=[],

Fine to have something like this for the top level, but let's pass something more direct to each decoder layer (so we don't need to pass the index down). Make sure to document it if we keep it.
Regarding "but let's pass something more direct to each decoder layer": what do you suggest?
/gemini review
Code Review
This pull request adds support for the Qwen3 MoE model. The implementation looks solid, covering the backbone, attention, decoder, tokenizer, and conversion scripts. I've identified several high-severity issues related to incomplete get_config methods in various new layers, which will prevent model serialization from working correctly. There are also some medium-severity issues, like unused parameters, and a critical issue in the checkpoint conversion test script where an incorrect preprocessor is used. I've provided suggestions to fix these issues. Once addressed, the PR should be in great shape.
keras_hub_preprocessor = keras_hub.models.QwenCausalLMPreprocessor(
    keras_hub_tokenizer
)

The test is using keras_hub.models.QwenCausalLMPreprocessor, which is for the Qwen2 model. This test should use the newly added Qwen3MoeCausalLMPreprocessor to correctly test the Qwen3 MoE model components.
keras_hub_preprocessor = keras_hub.models.Qwen3MoeCausalLMPreprocessor(
    keras_hub_tokenizer
)
def get_config(self):
    config = super().get_config()
    config.update(
        {
            "num_query_heads": self.num_query_heads,
            "num_key_value_heads": self.num_key_value_heads,
            "rope_max_wavelength": self.rope_max_wavelength,
            "rope_scaling_factor": self.rope_scaling_factor,
            "kernel_initializer": keras.initializers.serialize(
                self.kernel_initializer
            ),
            "dropout": self.dropout,
            "sliding_window_size": self.sliding_window_size,
        }
    )
    return config

The get_config method is missing several parameters that are defined in __init__ and used in the layer: layer_index, head_dim, and layer_norm_epsilon. Without these, the layer cannot be correctly serialized and deserialized, which will break model saving and loading.
def get_config(self):
    config = super().get_config()
    config.update(
        {
            "num_query_heads": self.num_query_heads,
            "num_key_value_heads": self.num_key_value_heads,
            "rope_max_wavelength": self.rope_max_wavelength,
            "rope_scaling_factor": self.rope_scaling_factor,
            "kernel_initializer": keras.initializers.serialize(
                self.kernel_initializer
            ),
            "dropout": self.dropout,
            "sliding_window_size": self.sliding_window_size,
            "layer_index": self.layer_index,
            "head_dim": self.head_dim,
            "layer_norm_epsilon": self.layer_norm_epsilon,
        }
    )
    return config
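A quick way to catch this class of issue is a config round trip in the layer test (a minimal sketch; the class name and constructor arguments here are assumptions, not the PR's actual test code):

# Hypothetical round-trip check: rebuild the layer from its own config.
# Any key missing from get_config() shows up as a TypeError in from_config()
# or as a mismatch between the two configs.
layer = Qwen3MoeAttention(
    num_query_heads=8,
    num_key_value_heads=2,
    head_dim=16,
    layer_index=0,
)
restored = Qwen3MoeAttention.from_config(layer.get_config())
assert layer.get_config() == restored.get_config()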
"mlp_only_layers": self.mlp_only_layers, | ||
} |
config = super().get_config()
config.update(
    {
        "num_query_heads": self.num_query_heads,
        "intermediate_dim": self.intermediate_dim,
        "moe_intermediate_dim": self.moe_intermediate_dim,
        "rope_max_wavelength": self.rope_max_wavelength,
        "num_key_value_heads": self.num_key_value_heads,
        "rope_scaling_factor": self.rope_scaling_factor,
        "layer_norm_epsilon": self.layer_norm_epsilon,
        "dropout": self.dropout,
        "sliding_window_size": self.sliding_window_size,
        "num_experts": self.num_experts,
        "top_k": self.top_k,
        "norm_top_k_prob": self.norm_top_k_prob,
        "router_aux_loss_coefficient": self.router_aux_loss_coefficient,
    }
)
return config

The get_config method is missing several parameters from __init__: head_dim, is_sparse_mlp, activation, kernel_initializer, and layer_index. These are all necessary to correctly serialize and deserialize the layer.
config = super().get_config()
config.update(
    {
        "num_query_heads": self.num_query_heads,
        "intermediate_dim": self.intermediate_dim,
        "moe_intermediate_dim": self.moe_intermediate_dim,
        "rope_max_wavelength": self.rope_max_wavelength,
        "num_key_value_heads": self.num_key_value_heads,
        "rope_scaling_factor": self.rope_scaling_factor,
        "layer_norm_epsilon": self.layer_norm_epsilon,
        "dropout": self.dropout,
        "sliding_window_size": self.sliding_window_size,
        "num_experts": self.num_experts,
        "top_k": self.top_k,
        "norm_top_k_prob": self.norm_top_k_prob,
        "router_aux_loss_coefficient": self.router_aux_loss_coefficient,
        "head_dim": self.head_dim,
        "is_sparse_mlp": self.is_sparse_mlp,
        "activation": keras.activations.serialize(self.activation),
        "kernel_initializer": keras.initializers.serialize(
            self.kernel_initializer
        ),
        "layer_index": self.layer_index,
    }
)
return config
def convert_backbone_config(transformers_config):
    return {
        "vocabulary_size": transformers_config["vocab_size"],
        "hidden_dim": transformers_config["hidden_size"],
        "head_dim": transformers_config["head_dim"],
        "num_layers": transformers_config["num_hidden_layers"],
        "num_query_heads": transformers_config["num_attention_heads"],
        "num_key_value_heads": transformers_config["num_key_value_heads"],
        "intermediate_dim": transformers_config["intermediate_size"],
        "moe_intermediate_dim": transformers_config["moe_intermediate_size"],
        "num_experts": transformers_config["num_experts"],
        "top_k": transformers_config["num_experts_per_tok"],
        "norm_top_k_prob": transformers_config["norm_topk_prob"],
        "decoder_sparse_step": transformers_config["decoder_sparse_step"],
        "layer_norm_epsilon": transformers_config["rms_norm_eps"],
        "rope_max_wavelength": transformers_config["rope_theta"],
        "use_sliding_window": transformers_config["use_sliding_window"],
        "sliding_window_size": transformers_config["sliding_window"],
        "output_router_logits": transformers_config["output_router_logits"],
        "router_aux_loss_coefficient": transformers_config[
            "router_aux_loss_coef"
        ],
    }

The convert_backbone_config function has a couple of issues:
- It extracts use_sliding_window and output_router_logits from the Hugging Face config, but these are not used by Qwen3MoeBackbone.
- It's missing tie_word_embeddings, which is crucial for correct weight loading. The convert_weights function depends on this value.
These can be fixed by updating the returned dictionary.
def convert_backbone_config(transformers_config):
    return {
        "vocabulary_size": transformers_config["vocab_size"],
        "hidden_dim": transformers_config["hidden_size"],
        "head_dim": transformers_config["head_dim"],
        "num_layers": transformers_config["num_hidden_layers"],
        "num_query_heads": transformers_config["num_attention_heads"],
        "num_key_value_heads": transformers_config["num_key_value_heads"],
        "intermediate_dim": transformers_config["intermediate_size"],
        "moe_intermediate_dim": transformers_config["moe_intermediate_size"],
        "num_experts": transformers_config["num_experts"],
        "top_k": transformers_config["num_experts_per_tok"],
        "norm_top_k_prob": transformers_config["norm_topk_prob"],
        "decoder_sparse_step": transformers_config["decoder_sparse_step"],
        "layer_norm_epsilon": transformers_config["rms_norm_eps"],
        "rope_max_wavelength": transformers_config["rope_theta"],
        "sliding_window_size": transformers_config["sliding_window"],
        "router_aux_loss_coefficient": transformers_config[
            "router_aux_loss_coef"
        ],
        "tie_word_embeddings": transformers_config.get(
            "tie_word_embeddings", False
        ),
    }
dropout=0,
layer_norm_epsilon=1e-5,
sliding_window_size=4096,
max_window_layers=28,

def get_config(self):
    config = super().get_config()
    config.update({"epsilon": self.epsilon})

No description provided.