@@ -7,6 +7,10 @@
"featureShortDescription": {
"03" : "Time series forecasting",
"04" : "Question Answering",
"05" : "Sentiment analysis"
"05" : "Sentiment analysis",
"06" : "Text classification",
"07" : "Feature extraction",
"08" : "Text generation",
"12" : "Time series forecasting"
}
}
@@ -0,0 +1,14 @@
<p>This page explains how to use Hugging Face sentiment analysis models in LEAN trading algorithms. These models classify financial text into sentiment categories like positive, negative, and neutral. The following models are available:</p>

<ul>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/ahmedrachid/FinancialBERT-Sentiment-Analysis">ahmedrachid/FinancialBERT-Sentiment-Analysis</a> &mdash; A BERT model fine-tuned on financial data for sentiment analysis, classifying text as positive, negative, or neutral.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/bardsai/finance-sentiment-fr-base">bardsai/finance-sentiment-fr-base</a> &mdash; A French-language financial sentiment classification model.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest">cardiffnlp/twitter-roberta-base-sentiment-latest</a> &mdash; A RoBERTa model fine-tuned on tweets for sentiment analysis.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis">mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis</a> &mdash; A DistilRoBERTa model fine-tuned on financial news for sentiment classification.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/nickmuchi/deberta-v3-base-finetuned-finance-text-classification">nickmuchi/deberta-v3-base-finetuned-finance-text-classification</a> &mdash; A DeBERTa model fine-tuned for financial text classification.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/nickmuchi/distilroberta-finetuned-financial-text-classification">nickmuchi/distilroberta-finetuned-financial-text-classification</a> &mdash; A DistilRoBERTa model fine-tuned for financial text classification.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/nickmuchi/sec-bert-finetuned-finance-classification">nickmuchi/sec-bert-finetuned-finance-classification</a> &mdash; A SEC-BERT model fine-tuned for financial document classification.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/StephanAkkerman/FinTwitBERT-sentiment">StephanAkkerman/FinTwitBERT-sentiment</a> &mdash; A BERT model fine-tuned on financial tweets for sentiment classification.</li>
</ul>

<p>All of these models accept text input and return classification labels with confidence scores. You can use them with the Hugging Face <code>transformers</code> library to analyze the sentiment of financial news and social media posts, then use the results to inform trading decisions.</p>
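
<p>As a minimal sketch (the model choice and sample headline below are illustrative, and it assumes the <code>transformers</code> library is available in your environment), you can load one of the models above with a <code>text-classification</code> pipeline:</p>

<div class="section-example-container">
<pre class="python">from transformers import pipeline

# Load a text-classification pipeline with one of the models listed above.
classifier = pipeline(
    "text-classification",
    model="ahmedrachid/FinancialBERT-Sentiment-Analysis"
)

# Classify a sample headline. The output is a list of dictionaries with a
# label and a confidence score, e.g. [{'label': 'positive', 'score': 0.98}].
result = classifier("Quarterly revenue beat analyst expectations.")
print(result)</pre>
</div>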
@@ -0,0 +1,112 @@
<p>
The following examples demonstrate usage of Hugging Face sentiment analysis models.
</p>
<h4>
Example 1: News Sentiment Trading
</h4>
<p>
The following algorithm selects the most volatile of the ten most liquid US Equities at the beginning of each month.
It gets the most recent <a href="/datasets/tiingo-news-feed">Tiingo News</a> articles for the asset and feeds them into a sentiment analysis model.
It then aggregates the sentiment scores of all the news releases.
If the aggregated positive sentiment outweighs the negative sentiment, it enters a long position for the month.
Otherwise, it enters a short position.
You can replace the model name with any of the sentiment analysis models listed on the introduction page.
</p>
<div class="section-example-container skip-test">
<pre class="python">from transformers import pipeline, set_seed

class SentimentAnalysisModelAlgorithm(QCAlgorithm):

def initialize(self):
self.set_start_date(2024, 9, 1)
self.set_end_date(2024, 12, 31)
self.set_cash(100_000)

self.universe_settings.resolution = Resolution.DAILY
self.universe_settings.schedule.on(self.date_rules.month_start("SPY"))
self._universe = self.add_universe(
lambda fundamental: [
self.history(
[f.symbol for f in sorted(
fundamental, key=lambda f: f.dollar_volume
)[-10:]],
timedelta(365), Resolution.DAILY
)['close'].unstack(0).pct_change().iloc[1:].std().idxmax()
]
)

set_seed(1, True)

# Load the sentiment analysis pipeline.
# Replace the model name with any supported sentiment model.
self._sentiment_pipeline = pipeline(
"text-classification",
model="mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis"
)

self._last_rebalance_time = datetime.min
self.set_warm_up(30, Resolution.DAILY)

def on_warmup_finished(self):
self._trade()
self.schedule.on(
self.date_rules.month_start("SPY", 1),
self.time_rules.midnight,
self._trade
)

def on_securities_changed(self, changes):
for security in changes.removed_securities:
self.remove_security(security.dataset_symbol)
for security in changes.added_securities:
security.dataset_symbol = self.add_data(
TiingoNews, security.symbol
).symbol

def _trade(self):
if (self.is_warming_up or
self.time - self._last_rebalance_time &lt; timedelta(14)):
return

# Get the target security.
security = self.securities[list(self._universe.selected)[0]]

# Get the latest news articles.
articles = self.history[TiingoNews](
security.dataset_symbol, 10, Resolution.DAILY
)
article_text = [
article.description for article in articles
if article.description
]
if not article_text:
return

# Run sentiment analysis on each article.
# Truncate long articles to the model's max length.
results = self._sentiment_pipeline(
article_text, truncation=True, max_length=512
)

# Aggregate sentiment scores.
positive_score = 0
negative_score = 0
for result in results:
label = result['label'].lower()
score = result['score']
if 'pos' in label:
positive_score += score
elif 'neg' in label:
negative_score += score

self.plot("Sentiment", "Positive", positive_score)
self.plot("Sentiment", "Negative", negative_score)

# Rebalance based on sentiment.
weight = 1 if positive_score &gt; negative_score else -0.25
self.set_holdings(
security.symbol, weight,
liquidate_existing_holdings=True
)
self._last_rebalance_time = self.time</pre>
</div>
@@ -0,0 +1,12 @@
{
"type": "metadata",
"values": {
"description": "This page explains how to use Hugging Face sentiment analysis models in LEAN trading algorithms.",
"keywords": "sentiment analysis model, text classification, pre-trained AI model, financial sentiment, free AI models",
"og:description": "This page explains how to use Hugging Face sentiment analysis models in LEAN trading algorithms.",
"og:title": "Sentiment Analysis Models - Documentation QuantConnect.com",
"og:type": "website",
"og:site_name": "Sentiment Analysis Models - QuantConnect.com",
"og:image": "https://cdn.quantconnect.com/docs/i/writing-algorithms/machine-learning/hugging-face/popular-models/sentiment-analysis.png"
}
}
@@ -0,0 +1,10 @@
<p>This page explains how to use Hugging Face fill-mask models in LEAN trading algorithms. Fill-mask models predict the most likely word to fill a masked position in a sentence. You can use them to extract text embeddings and build feature vectors from financial text. The following models are available:</p>

<ul>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/google-bert/bert-base-uncased">google-bert/bert-base-uncased</a> &mdash; The original BERT base model (uncased), widely used for natural language understanding tasks.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/distilbert/distilbert-base-uncased">distilbert/distilbert-base-uncased</a> &mdash; A distilled version of BERT that is 60% faster while retaining 97% of BERT's language understanding.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/FacebookAI/roberta-base">FacebookAI/roberta-base</a> &mdash; A robustly optimized BERT pretraining approach by Facebook AI.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/microsoft/deberta-base">microsoft/deberta-base</a> &mdash; A DeBERTa model by Microsoft that uses disentangled attention for improved language understanding.</li>
</ul>

<p>These models are useful for extracting text embeddings from financial news. You can feed these embeddings into a downstream classifier or use cosine similarity to measure the semantic similarity between documents.</p>
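
<p>As a rough sketch of this workflow (the model choice and sample sentences are illustrative, and it assumes <code>torch</code>, <code>numpy</code>, and <code>transformers</code> are available), the following snippet extracts [CLS] token embeddings from two headlines and measures their cosine similarity:</p>

<div class="section-example-container">
<pre class="python">import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

# Load a fill-mask model as a plain encoder to extract hidden states.
model_name = "distilbert/distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def embed(text):
    # Tokenize and run a forward pass without tracking gradients.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    # Use the first ([CLS]) token's hidden state as the sentence embedding.
    return outputs.last_hidden_state[:, 0, :].squeeze().numpy()

a = embed("Earnings rose sharply on strong demand.")
b = embed("Profits jumped as sales surged.")

# Cosine similarity is close to 1 for semantically similar sentences.
similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(similarity)</pre>
</div>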
@@ -0,0 +1,129 @@
<p>
The following examples demonstrate usage of Hugging Face fill-mask models for feature extraction.
</p>
<h4>
Example 1: Embedding-Based News Similarity
</h4>
<p>
The following algorithm selects the most volatile of the ten most liquid US Equities at the beginning of each month.
It uses a fill-mask model to extract embeddings from the most recent <a href="/datasets/tiingo-news-feed">Tiingo News</a> articles for the asset.
It then compares the average embedding of the recent news to reference "bullish" and "bearish" embeddings.
If the recent news is more similar to the bullish reference, it enters a long position; otherwise, it enters a short position.
You can replace the model name with any of the fill-mask models listed on the introduction page.
</p>
<div class="section-example-container testable">
<pre class="python">import torch
import numpy as np
from transformers import AutoTokenizer, AutoModel, set_seed

class FillMaskEmbeddingAlgorithm(QCAlgorithm):

def initialize(self):
self.set_start_date(2024, 9, 1)
self.set_end_date(2024, 12, 31)
self.set_cash(100_000)

self.universe_settings.resolution = Resolution.DAILY
self.universe_settings.schedule.on(self.date_rules.month_start("SPY"))
self._universe = self.add_universe(
lambda fundamental: [
self.history(
[f.symbol for f in sorted(
fundamental, key=lambda f: f.dollar_volume
)[-10:]],
timedelta(365), Resolution.DAILY
)['close'].unstack(0).pct_change().iloc[1:].std().idxmax()
]
)

set_seed(1, True)

# Load the model and tokenizer.
# Replace with any fill-mask model (e.g., google-bert/bert-base-uncased).
model_name = "distilbert/distilbert-base-uncased"
self._tokenizer = AutoTokenizer.from_pretrained(model_name)
self._model = AutoModel.from_pretrained(model_name)
self._model.eval()

# Create reference embeddings for bullish/bearish text.
self._bullish_embedding = self._get_embedding(
"Stock prices surged on strong earnings and revenue growth."
)
self._bearish_embedding = self._get_embedding(
"Stock prices plunged on weak earnings and declining revenue."
)

self._last_rebalance_time = datetime.min
self.set_warm_up(30, Resolution.DAILY)

def on_warmup_finished(self):
self._trade()
self.schedule.on(
self.date_rules.month_start("SPY", 1),
self.time_rules.midnight,
self._trade
)

def on_securities_changed(self, changes):
for security in changes.removed_securities:
self.remove_security(security.dataset_symbol)
for security in changes.added_securities:
security.dataset_symbol = self.add_data(
TiingoNews, security.symbol
).symbol

def _get_embedding(self, text):
"""Extract the [CLS] token embedding from the model."""
inputs = self._tokenizer(
text, return_tensors="pt", truncation=True, max_length=512
)
with torch.no_grad():
outputs = self._model(**inputs)
# Use the [CLS] token (first token) embedding.
return outputs.last_hidden_state[:, 0, :].squeeze().numpy()

def _cosine_similarity(self, a, b):
return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def _trade(self):
if (self.is_warming_up or
self.time - self._last_rebalance_time &lt; timedelta(14)):
return

# Get the target security.
security = self.securities[list(self._universe.selected)[0]]

# Get the latest news articles.
articles = self.history[TiingoNews](
security.dataset_symbol, 10, Resolution.DAILY
)
article_text = [
article.description for article in articles
if article.description
]
if not article_text:
return

# Get embeddings for each article and average them.
embeddings = [self._get_embedding(text) for text in article_text]
avg_embedding = np.mean(embeddings, axis=0)

# Compare to reference embeddings.
bullish_sim = self._cosine_similarity(
avg_embedding, self._bullish_embedding
)
bearish_sim = self._cosine_similarity(
avg_embedding, self._bearish_embedding
)

self.plot("Similarity", "Bullish", bullish_sim)
self.plot("Similarity", "Bearish", bearish_sim)

# Rebalance based on similarity.
weight = 1 if bullish_sim &gt; bearish_sim else -0.25
self.set_holdings(
security.symbol, weight,
liquidate_existing_holdings=True
)
self._last_rebalance_time = self.time</pre>
</div>
@@ -0,0 +1,12 @@
{
"type": "metadata",
"values": {
"description": "This page explains how to use Hugging Face fill-mask models in LEAN trading algorithms.",
"keywords": "fill-mask model, feature extraction, pre-trained AI model, embeddings, free AI models",
"og:description": "This page explains how to use Hugging Face fill-mask models in LEAN trading algorithms.",
"og:title": "Fill-Mask Models - Documentation QuantConnect.com",
"og:type": "website",
"og:site_name": "Fill-Mask Models - QuantConnect.com",
"og:image": "https://cdn.quantconnect.com/docs/i/writing-algorithms/machine-learning/hugging-face/popular-models/fill-mask.png"
}
}
@@ -0,0 +1,9 @@
<p>This page explains how to use Hugging Face text generation models in LEAN trading algorithms. These models generate text given an input prompt, which you can use for tasks like summarizing financial data or generating structured analysis. The following models are available:</p>

<ul>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/openai-community/gpt2">openai-community/gpt2</a> &mdash; The GPT-2 language model by OpenAI, a lightweight model suitable for text generation tasks.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/google/gemma-7b">google/gemma-7b</a> &mdash; A 7B parameter model from Google's Gemma family, offering strong text generation capabilities. Requires a GPU node.</li>
<li><a rel="nofollow" target="_blank" href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B">deepseek-ai/DeepSeek-R1-Distill-Llama-70B</a> &mdash; A 70B parameter reasoning model distilled from DeepSeek-R1 into a Llama architecture. Requires a GPU node with significant memory.</li>
</ul>

<p>Text generation models can analyze market context and generate structured outputs. You can prompt them to classify market conditions or extract trading signals from financial text. Note that larger models like Gemma-7B and DeepSeek-70B require GPU nodes with sufficient memory.</p>
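
<p>As a minimal sketch (the prompt and generation parameters are illustrative), the following snippet generates a market-commentary completion with GPT-2, the smallest of the models above:</p>

<div class="section-example-container">
<pre class="python">from transformers import pipeline, set_seed

# Seed the generator so the sampled output is reproducible.
set_seed(1)

# GPT-2 is small enough to run without a GPU node.
generator = pipeline("text-generation", model="openai-community/gpt2")

prompt = "Given rising rates and weak earnings, the market outlook is"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])</pre>
</div>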