When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain

Raj Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, Diyi Yang

Abstract

Pre-trained language models have shown impressive performance on a variety of tasks and domains. Previous research on financial language models usually employs a generic training scheme to train standard model architectures, without completely leveraging the richness of the financial data. We propose a novel domain-specific Financial LANGuage model (FLANG) which uses financial keywords and phrases for better masking, together with a span boundary objective and an in-filling objective. Additionally, the evaluation benchmarks in the field have been limited. To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain. These include new benchmarks across five NLP tasks in the financial domain as well as common benchmarks used in previous research. Experiments on these benchmarks suggest that our model outperforms those in prior literature on a variety of NLP tasks. Our models, code, and benchmark data will be made publicly available on GitHub and Hugging Face.
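The abstract's central pre-training idea is a masking scheme that prefers financial keywords and phrases over ordinary tokens. The sketch below illustrates that idea only and is not the authors' released implementation; the toy lexicon, the 3x boost factor, the 15% masking budget, and the mask-only replacement (standard BERT also keeps or randomizes 20% of selected tokens) are all illustrative assumptions:

```python
# Illustrative sketch of keyword-preferential masking for masked language
# modeling, loosely following the idea described in the FLANG abstract.
# The lexicon, boost factor, and masking budget are assumptions, not the
# authors' released configuration.
import random

FINANCIAL_KEYWORDS = {"dividend", "earnings", "liquidity", "hedge", "equity"}  # toy lexicon

def weighted_sample(positions, weights, k):
    """Weighted sampling without replacement via Efraimidis-Spirakis keys."""
    keyed = [(random.random() ** (1.0 / w), p) for p, w in zip(positions, weights)]
    return [p for _, p in sorted(keyed, reverse=True)[:k]]

def mask_for_mlm(tokens, mask_rate=0.15, keyword_boost=3.0, mask_token="[MASK]"):
    """Return (masked_tokens, labels); labels are None where no loss is taken."""
    weights = [keyword_boost if t.lower() in FINANCIAL_KEYWORDS else 1.0 for t in tokens]
    k = max(1, round(mask_rate * len(tokens)))
    chosen = set(weighted_sample(range(len(tokens)), weights, k))
    masked, labels = [], []
    for i, tok in enumerate(tokens):
        masked.append(mask_token if i in chosen else tok)
        labels.append(tok if i in chosen else None)
    return masked, labels

tokens = "the company raised its dividend despite weak quarterly earnings".split()
print(mask_for_mlm(tokens))  # keyword positions are masked more often than others
```

Since the abstract says the models are released on Hugging Face, the published checkpoints should load through the standard transformers API; the repository id below is an assumption based on the authors' group name, so confirm it on the Hub before use:

```python
# Hypothetical checkpoint id; verify on huggingface.co before relying on it.
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SALT-NLP/FLANG-BERT")
model = AutoModelForMaskedLM.from_pretrained("SALT-NLP/FLANG-BERT")
```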

Anthology ID:
2022.emnlp-main.148
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2322–2335
URL:
https://aclanthology.org/2022.emnlp-main.148
DOI:
10.18653/v1/2022.emnlp-main.148
Bibkey:
shah-etal-2022-flue
Cite (ACL):
Raj Shah, Kunal Chawla, Dheeraj Eidnani, Agam Shah, Wendi Du, Sudheer Chava, Natraj Raman, Charese Smiley, Jiaao Chen, and Diyi Yang. 2022. When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2322–2335, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain (Shah et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.148.pdf


Export citation

BibTeX:
@inproceedings{shah-etal-2022-flue,
    title = "When {FLUE} Meets {FLANG}: Benchmarks and Large Pretrained Language Model for Financial Domain",
    author = "Shah, Raj and Chawla, Kunal and Eidnani, Dheeraj and Shah, Agam and Du, Wendi and Chava, Sudheer and Raman, Natraj and Smiley, Charese and Chen, Jiaao and Yang, Diyi",
    editor = "Goldberg, Yoav and Kozareva, Zornitsa and Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.148",
    doi = "10.18653/v1/2022.emnlp-main.148",
    pages = "2322--2335",
    abstract = "Pre-trained language models have shown impressive performance on a variety of tasks and domains. Previous research on financial language models usually employs a generic training scheme to train standard model architectures, without completely leveraging the richness of the financial data. We propose a novel domain-specific Financial LANGuage model (FLANG) which uses financial keywords and phrases for better masking, together with a span boundary objective and an in-filling objective. Additionally, the evaluation benchmarks in the field have been limited. To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain. These include new benchmarks across five NLP tasks in the financial domain as well as common benchmarks used in previous research. Experiments on these benchmarks suggest that our model outperforms those in prior literature on a variety of NLP tasks. Our models, code, and benchmark data will be made publicly available on GitHub and Hugging Face.",
}

MODS XML:

<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
  <mods ID="shah-etal-2022-flue">
    <titleInfo>
      <title>When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain</title>
    </titleInfo>
    <name type="personal">
      <namePart type="given">Raj</namePart>
      <namePart type="family">Shah</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Kunal</namePart>
      <namePart type="family">Chawla</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Dheeraj</namePart>
      <namePart type="family">Eidnani</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Agam</namePart>
      <namePart type="family">Shah</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Wendi</namePart>
      <namePart type="family">Du</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Sudheer</namePart>
      <namePart type="family">Chava</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Natraj</namePart>
      <namePart type="family">Raman</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Charese</namePart>
      <namePart type="family">Smiley</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Jiaao</namePart>
      <namePart type="family">Chen</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <name type="personal">
      <namePart type="given">Diyi</namePart>
      <namePart type="family">Yang</namePart>
      <role><roleTerm authority="marcrelator" type="text">author</roleTerm></role>
    </name>
    <originInfo>
      <dateIssued>2022-12</dateIssued>
    </originInfo>
    <typeOfResource>text</typeOfResource>
    <relatedItem type="host">
      <titleInfo>
        <title>Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing</title>
      </titleInfo>
      <name type="personal">
        <namePart type="given">Yoav</namePart>
        <namePart type="family">Goldberg</namePart>
        <role><roleTerm authority="marcrelator" type="text">editor</roleTerm></role>
      </name>
      <name type="personal">
        <namePart type="given">Zornitsa</namePart>
        <namePart type="family">Kozareva</namePart>
        <role><roleTerm authority="marcrelator" type="text">editor</roleTerm></role>
      </name>
      <name type="personal">
        <namePart type="given">Yue</namePart>
        <namePart type="family">Zhang</namePart>
        <role><roleTerm authority="marcrelator" type="text">editor</roleTerm></role>
      </name>
      <originInfo>
        <publisher>Association for Computational Linguistics</publisher>
        <place>
          <placeTerm type="text">Abu Dhabi, United Arab Emirates</placeTerm>
        </place>
      </originInfo>
      <genre authority="marcgt">conference publication</genre>
    </relatedItem>
    <abstract>Pre-trained language models have shown impressive performance on a variety of tasks and domains. Previous research on financial language models usually employs a generic training scheme to train standard model architectures, without completely leveraging the richness of the financial data. We propose a novel domain-specific Financial LANGuage model (FLANG) which uses financial keywords and phrases for better masking, together with a span boundary objective and an in-filling objective. Additionally, the evaluation benchmarks in the field have been limited. To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain. These include new benchmarks across five NLP tasks in the financial domain as well as common benchmarks used in previous research. Experiments on these benchmarks suggest that our model outperforms those in prior literature on a variety of NLP tasks. Our models, code, and benchmark data will be made publicly available on GitHub and Hugging Face.</abstract>
    <identifier type="citekey">shah-etal-2022-flue</identifier>
    <identifier type="doi">10.18653/v1/2022.emnlp-main.148</identifier>
    <location>
      <url>https://aclanthology.org/2022.emnlp-main.148</url>
    </location>
    <part>
      <date>2022-12</date>
      <extent unit="page">
        <start>2322</start>
        <end>2335</end>
      </extent>
    </part>
  </mods>
</modsCollection>

Endnote:

%0 Conference Proceedings
%T When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain
%A Shah, Raj
%A Chawla, Kunal
%A Eidnani, Dheeraj
%A Shah, Agam
%A Du, Wendi
%A Chava, Sudheer
%A Raman, Natraj
%A Smiley, Charese
%A Chen, Jiaao
%A Yang, Diyi
%Y Goldberg, Yoav
%Y Kozareva, Zornitsa
%Y Zhang, Yue
%S Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
%D 2022
%8 December
%I Association for Computational Linguistics
%C Abu Dhabi, United Arab Emirates
%F shah-etal-2022-flue
%X Pre-trained language models have shown impressive performance on a variety of tasks and domains. Previous research on financial language models usually employs a generic training scheme to train standard model architectures, without completely leveraging the richness of the financial data. We propose a novel domain-specific Financial LANGuage model (FLANG) which uses financial keywords and phrases for better masking, together with a span boundary objective and an in-filling objective. Additionally, the evaluation benchmarks in the field have been limited. To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain. These include new benchmarks across five NLP tasks in the financial domain as well as common benchmarks used in previous research. Experiments on these benchmarks suggest that our model outperforms those in prior literature on a variety of NLP tasks. Our models, code, and benchmark data will be made publicly available on GitHub and Hugging Face.
%R 10.18653/v1/2022.emnlp-main.148
%U https://aclanthology.org/2022.emnlp-main.148
%U https://doi.org/10.18653/v1/2022.emnlp-main.148
%P 2322-2335


Markdown (Informal)

[When FLUE Meets FLANG: Benchmarks and Large Pretrained Language Model for Financial Domain](https://aclanthology.org/2022.emnlp-main.148) (Shah et al., EMNLP 2022)
