One of the fundamental tasks of natural language processing (NLP) is to determine whether two words have similar meanings, and how closely they are related.
Why would you need this? If you were analyzing threat reports, you'd want to be alerted to the word "bomb," but wouldn't you also want your text analytics solution to notify you about words like "explosive" and "incendiary device"?
As humans, we can recognize and compare words, but machines have difficulty decoding the meaning of words. That's why NLP tools need word embeddings, which represent the meaning of words as numeric vectors that can be added, subtracted and compared mathematically.
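To make that idea concrete, here is a minimal sketch of how embedding vectors are compared using cosine similarity, a standard measure for this purpose. The four-dimensional vectors and the small vocabulary below are toy values invented purely for illustration; real embeddings have hundreds of dimensions and are learned from large text corpora by models such as word2vec or GloVe.

```python
import numpy as np

# Toy 4-dimensional word vectors, hand-made for illustration only.
# Real embeddings are learned from large corpora, not assigned by hand.
embeddings = {
    "bomb":       np.array([0.90, 0.80, 0.10, 0.05]),
    "explosive":  np.array([0.85, 0.75, 0.20, 0.10]),
    "incendiary": np.array([0.80, 0.70, 0.30, 0.10]),
    "banana":     np.array([0.05, 0.10, 0.90, 0.80]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words related to "bomb" score close to 1.0; unrelated words score low.
query = embeddings["bomb"]
for word, vector in embeddings.items():
    print(f"similarity(bomb, {word}) = {cosine_similarity(query, vector):.3f}")
```

In a real system, the same comparison would run over a vocabulary of hundreds of thousands of learned vectors, which is how a search for "bomb" can surface "explosive" and "incendiary device" without any explicit synonym list.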
During this webinar, BasisTech Chief Scientist Kfir Bar will explain the concepts behind word embeddings and how they apply to real-life situations. Attendees will learn:
- A brief history of the advancements in semantic technology
- What word embeddings enable us to do that we couldn't before
- How word meanings are calculated and compared
- How word embeddings enable multilingual semantic searches
- How semantic similarity boosts AI for extracting entities, matching names, and understanding events