Understanding the relationship between textual news and time-series evolution is a critical yet under-explored challenge in applied data science. While multimodal learning has gained traction, existing multimodal time-series datasets fall short in evaluating cross-modal reasoning and complex question answering, which are essential for capturing complex interactions between narrative information and temporal patterns. To bridge this gap, we introduce **MTBench** (**M**ultimodal **T**ime Series **Bench**mark), a large-scale benchmark designed to evaluate large language models (LLMs) on time series and text understanding across financial and weather domains. MTBench comprises paired time-series and textual data, including financial news with corresponding stock price movements and weather reports aligned with historical temperature records. Unlike existing benchmarks that focus on isolated modalities, MTBench provides a comprehensive testbed for models to jointly reason over structured numerical trends and unstructured textual narratives. The richness of MTBench enables the formulation of diverse tasks that require a deep understanding of both text and time-series data, including time-series forecasting, semantic and technical trend analysis, and news-driven question answering (QA). These tasks target the model’s ability to capture temporal dependencies, extract key insights from textual context, and integrate cross-modal information. We evaluate state-of-the-art LLMs on MTBench, analyzing their effectiveness in modeling the complex relationships between news narratives and temporal patterns. Our findings reveal significant challenges in current models, including difficulties in capturing long-term dependencies, interpreting causality in financial and weather trends, and effectively fusing multimodal information.
## Dataset Overview
Figuring out how news articles influence changes in time-series data (like stock prices or weather trends) is an important but still under-explored area in applied data science. While AI models that handle multiple types of data (like text and numbers) are becoming more popular, most existing datasets don’t do a great job of testing how well these models can connect information across different formats.
To address this challenge, we introduce **MTBench** (**M**ultimodal **T**ime Series **Bench**mark), a large-scale dataset designed to test how well large language models (LLMs) understand both time-series data and text in financial and weather-related contexts. MTBench pairs numerical data with relevant text—for example, financial news linked to stock price changes and weather reports aligned with historical temperature trends. Unlike existing benchmarks that focus on either text or numbers separately, MTBench challenges models to analyze both together, helping assess their ability to recognize patterns, make predictions, and answer questions based on cross-modal reasoning. This enables a wide range of tasks, such as forecasting trends, interpreting news impact on data, and extracting meaningful insights from both structured (numerical) and unstructured (textual) information.
We test the latest large language models (LLMs) on MTBench to see how well they understand the connection between news stories and time-based trends. Our results highlight major challenges—these models struggle to recognize long-term patterns, understand cause-and-effect relationships in financial and weather data, and seamlessly combine insights from both text and numerical information.
## Dataset Collection
### Finance Dataset
The process of collecting the finance dataset is illustrated in Figure 1. To build a diverse dataset linking stock market news with time-series data, we gathered over 200,000 financial news article URLs from reputable sources such as GlobeNews, MarketWatch, SeekingAlpha, Zacks, Invezz, Quartz (QZ), PennyStocks, and Benzinga, spanning May 2021 to September 2023. We then scraped key details from these articles, including text content, titles, stock names, and publication dates. From this collection, we selected a subset of 20,000 news articles, ensuring a balanced distribution of article lengths.
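The length-balanced subsampling step can be sketched as follows. The word-count bucket edges, target size, and record fields here are illustrative stand-ins, not the exact values used to build MTBench:

```python
import random

def balanced_subsample(articles, n_target, bucket_edges=(500, 1500, 3000), seed=0):
    """Sample articles so each word-count bucket contributes roughly equally.

    `articles` is a list of dicts with a "text" field; `bucket_edges` are
    illustrative word-count boundaries, not MTBench's actual thresholds.
    """
    rng = random.Random(seed)
    buckets = {i: [] for i in range(len(bucket_edges) + 1)}
    for art in articles:
        n_words = len(art["text"].split())
        # Bucket index = number of edges the article length meets or exceeds.
        idx = sum(n_words >= edge for edge in bucket_edges)
        buckets[idx].append(art)
    per_bucket = n_target // len(buckets)
    sample = []
    for items in buckets.values():
        rng.shuffle(items)
        sample.extend(items[:per_bucket])
    return sample
```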
To enhance the dataset with structured metadata, we used GPT-4o to annotate each article with its content type, the time range of its impact, and its sentiment.
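Model-produced annotations can be checked against a simple schema before being merged into the dataset. The field names and label sets below are illustrative assumptions, not MTBench's actual annotation schema:

```python
# Illustrative label sets; the real content types and sentiment labels
# used for MTBench may differ.
ALLOWED_TYPES = {"earnings", "analyst_rating", "macro", "product", "other"}
ALLOWED_SENTIMENTS = {"positive", "neutral", "negative"}

def validate_annotation(record):
    """Return True if an annotation record has the expected fields and labels."""
    return (
        record.get("content_type") in ALLOWED_TYPES
        and record.get("sentiment") in ALLOWED_SENTIMENTS
        and isinstance(record.get("effect_range_days"), (int, float))
        and record["effect_range_days"] > 0
    )
```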
**Stock Time-Series Collection.**
For each financial news article, we identified relevant stock time-series data based on the extracted sentiment and stock name. We retrieved historical stock prices with opening values sampled at different time scales. To maintain high data quality, we excluded cases where stock price data was missing for more than 70% of the timeframe, often due to market closures (e.g., weekends, holidays).
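The missing-data filter amounts to a one-line check per sample. The list-of-floats representation, with `None` marking missing points, is an illustrative simplification:

```python
def keep_series(prices, max_missing_frac=0.7):
    """Keep a sample only if at most 70% of its price points are missing.

    Missing points (`None`) typically correspond to market closures such as
    weekends and holidays.
    """
    n_missing = sum(p is None for p in prices)
    return n_missing / len(prices) <= max_missing_frac
```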
To align news articles with stock trends, we assumed that each article is published at the 90% point of its input time-series window, i.e., 90% of the window precedes the article. We created two forecasting scenarios:
42
+
**Short-Term Prediction**: Using 7 days of stock price data at a 5-minute resolution to predict price movements for the next day.
43
+
**Long-Term Prediction**: Using 30 days of stock price data at a 1-hour resolution to forecast stock movements for the following 7 days.
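Assuming the article timestamp sits at the 90% point of the input window, the two window configurations above can be sketched as follows; the function and setting names are illustrative:

```python
from datetime import datetime, timedelta

# (input length, sampling interval, forecast horizon) for the two setups.
SETTINGS = {
    "short": (timedelta(days=7), timedelta(minutes=5), timedelta(days=1)),
    "long": (timedelta(days=30), timedelta(hours=1), timedelta(days=7)),
}

def align_window(news_time, setting="short"):
    """Place the news timestamp at the 90% point of the input window and
    return (input_start, input_end, target_end)."""
    length, _, horizon = SETTINGS[setting]
    input_start = news_time - 0.9 * length
    input_end = input_start + length
    return input_start, input_end, input_end + horizon
```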
### Weather Dataset
We selected 50 airports across the United States as data sources, using the Global Historical Climatology Network Hourly (GHCN-H) dataset. The data, which spans from 2003 to 2020, is collected hourly. Each weather station records various attributes, including geographic location, temperature, humidity, wind speed, wind direction, visibility, pressure, and precipitation. Airports were chosen because their weather data is generally more reliable and accurate than data from other stations. In this study, we focus on temperature as the primary parameter, since it is a key factor in weather forecasting. However, our raw data pipeline includes additional channels, allowing for future expansion into multi-channel weather analysis.
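Extracting the single temperature channel from the multi-attribute station records can be sketched as below; the record field names are illustrative stand-ins for the GHCN-H column names:

```python
def temperature_series(records):
    """Pull the (timestamp, temperature) channel out of hourly station records,
    dropping hours where the temperature reading is missing.

    Each record is a dict carrying the full set of station attributes
    (humidity, wind speed, pressure, ...); only temperature is kept here.
    """
    return [
        (r["timestamp"], r["temperature"])
        for r in records
        if r.get("temperature") is not None
    ]
```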
Unlike stock price data, systematically collecting weather-related news is challenging, as routine reports often lack the context needed for complex analysis. To overcome this, we use the Storm Events Database, which records storm occurrences in the United States from 1950 to 2020. This dataset includes details such as storm type, location, fatalities, and injuries, covering various severe weather conditions, including hail, tornadoes, thunderstorms, floods, hurricanes, and typhoons.
Each entry in the database contains an <em>event ID</em>, which uniquely identifies an occurrence, and an <em>episode ID</em>, which links related events. For example, a hurricane might trigger multiple tornadoes, hailstorms, and thunderstorms, all grouped under the same episode ID. Each event also includes a textual description, offering valuable contextual information.
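Grouping related occurrences under their shared episode can be sketched as follows; the dict field names are illustrative:

```python
from collections import defaultdict

def group_by_episode(events):
    """Group storm events by episode so related occurrences (e.g. tornadoes
    spawned by one hurricane) end up under the same episode ID."""
    episodes = defaultdict(list)
    for ev in events:
        episodes[ev["episode_id"]].append(ev["event_id"])
    return dict(episodes)
```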