
Commit 6847801

z80maniac and ggerganov authored
server : allow to specify tokens as strings in logit_bias (#5003)
* server: allow to specify tokens as strings in logit_bias

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <[email protected]>

---------

Co-authored-by: Georgi Gerganov <[email protected]>
1 parent: 85910c5

2 files changed: +26 −8 lines


examples/server/README.md

1 addition, 1 deletion

```diff
@@ -185,7 +185,7 @@ node index.js
 
 `ignore_eos`: Ignore end of stream token and continue generating (default: false).
 
-`logit_bias`: Modify the likelihood of a token appearing in the generated text completion. For example, use `"logit_bias": [[15043,1.0]]` to increase the likelihood of the token 'Hello', or `"logit_bias": [[15043,-1.0]]` to decrease its likelihood. Setting the value to false, `"logit_bias": [[15043,false]]` ensures that the token `Hello` is never produced (default: []).
+`logit_bias`: Modify the likelihood of a token appearing in the generated text completion. For example, use `"logit_bias": [[15043,1.0]]` to increase the likelihood of the token 'Hello', or `"logit_bias": [[15043,-1.0]]` to decrease its likelihood. Setting the value to false, `"logit_bias": [[15043,false]]` ensures that the token `Hello` is never produced. The tokens can also be represented as strings, e.g. `[["Hello, World!",-0.5]]` will reduce the likelihood of all the individual tokens that represent the string `Hello, World!`, just like the `presence_penalty` does. (default: []).
 
 `n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token (default: 0)
```
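For illustration only (this example is not part of the commit), a completion request mixing both forms could look like the payload below. The token id 15043 is the README's own example; the prompt, the bias values, and the banned string are placeholders, and which tokens actually get biased depends on the model's tokenizer.

```json
{
  "prompt": "Building a website can be done in 10 simple steps:",
  "n_predict": 64,
  "logit_bias": [
    [15043, 1.0],
    ["Hello, World!", -0.5],
    ["As an AI language model", false]
  ]
}
```

Passing `false` as the bias bans every token produced by tokenizing the string, mirroring the existing behaviour for integer token ids.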

examples/server/server.cpp

25 additions, 7 deletions

```diff
@@ -626,18 +626,36 @@ struct llama_server_context
             const int n_vocab = llama_n_vocab(model);
             for (const auto &el : *logit_bias)
             {
-                if (el.is_array() && el.size() == 2 && el[0].is_number_integer())
+                if (el.is_array() && el.size() == 2)
                 {
-                    llama_token tok = el[0].get<llama_token>();
-                    if (tok >= 0 && tok < n_vocab)
+                    float bias;
+                    if (el[1].is_number())
                     {
-                        if (el[1].is_number())
+                        bias = el[1].get<float>();
+                    }
+                    else if (el[1].is_boolean() && !el[1].get<bool>())
+                    {
+                        bias = -INFINITY;
+                    }
+                    else
+                    {
+                        continue;
+                    }
+
+                    if (el[0].is_number_integer())
+                    {
+                        llama_token tok = el[0].get<llama_token>();
+                        if (tok >= 0 && tok < n_vocab)
                         {
-                            slot->sparams.logit_bias[tok] = el[1].get<float>();
+                            slot->sparams.logit_bias[tok] = bias;
                         }
-                        else if (el[1].is_boolean() && !el[1].get<bool>())
+                    }
+                    else if (el[0].is_string())
+                    {
+                        auto toks = llama_tokenize(model, el[0].get<std::string>(), false);
+                        for (auto tok : toks)
                         {
-                            slot->sparams.logit_bias[tok] = -INFINITY;
+                            slot->sparams.logit_bias[tok] = bias;
                         }
                     }
                 }
```
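To make the new rule easier to follow outside the diff, here is a minimal standalone sketch under stated assumptions: it uses nlohmann::json (which server.cpp already relies on), but `parse_logit_bias` and `tokenize_stub` are hypothetical names, and `tokenize_stub` only stands in for `llama_tokenize`.

```cpp
// Standalone sketch of the logit_bias parsing rule, not the server code itself.
// Assumes nlohmann::json is available; tokenize_stub is a hypothetical stand-in
// for llama_tokenize that maps each byte of the string to a fake token id.
#include <cmath>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

#include <nlohmann/json.hpp>

using json     = nlohmann::json;
using token_id = int32_t;

// Hypothetical tokenizer stand-in: one fake token per byte of the input string.
static std::vector<token_id> tokenize_stub(const std::string & text) {
    return std::vector<token_id>(text.begin(), text.end());
}

static std::unordered_map<token_id, float> parse_logit_bias(const json & logit_bias, int n_vocab) {
    std::unordered_map<token_id, float> out;
    for (const auto & el : logit_bias) {
        if (!el.is_array() || el.size() != 2) {
            continue;
        }
        // Second element: a numeric bias, or `false` meaning "never produce this token".
        float bias;
        if (el[1].is_number()) {
            bias = el[1].get<float>();
        } else if (el[1].is_boolean() && !el[1].get<bool>()) {
            bias = -INFINITY;
        } else {
            continue;
        }
        // First element: either a single token id, or a string that is expanded
        // into all of its tokens, each receiving the same bias.
        if (el[0].is_number_integer()) {
            token_id tok = el[0].get<token_id>();
            if (tok >= 0 && tok < n_vocab) {
                out[tok] = bias;
            }
        } else if (el[0].is_string()) {
            for (token_id tok : tokenize_stub(el[0].get<std::string>())) {
                out[tok] = bias;
            }
        }
    }
    return out;
}

int main() {
    const json lb = json::parse(R"([[15043, 1.0], ["Hi", -0.5], ["x", false]])");
    for (const auto & [tok, bias] : parse_logit_bias(lb, 32000)) {
        std::cout << tok << " -> " << bias << "\n";
    }
}
```

The order of checks mirrors the commit: the bias value is resolved first (a number, or `false` mapped to `-INFINITY`), then applied either to a single validated token id or to every token of the tokenized string.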
