Sentence Transformers are models built on top of transformer architectures like BERT to produce semantically meaningful vector embeddings of sentences. They are optimized for tasks like semantic similarity, clustering, and information retrieval.
Use Cases:
- Embedding sentences for semantic search
- Clustering short texts
- Measuring similarity between queries and documents (see the sketch after this list)
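
A minimal sketch of the search/similarity use case, using the `sentence_transformers` API (`SentenceTransformer.encode` and `util.cos_sim`); the corpus, query, and the `all-MiniLM-L6-v2` checkpoint are illustrative choices, not the only options:

```python
from sentence_transformers import SentenceTransformer, util

# Toy corpus and query, purely for illustration
corpus = [
    "A man is eating food.",
    "A cheetah chases prey across a field.",
    "The new movie is awesome.",
]
query = "What do cheetahs hunt?"

# Load a small, fast pretrained checkpoint
model = SentenceTransformer("all-MiniLM-L6-v2")

# Encode corpus and query into dense vectors
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and each corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]

# Rank corpus sentences by similarity to the query
for sentence, score in sorted(zip(corpus, scores), key=lambda x: -x[1]):
    print(f"{score:.4f}  {sentence}")
```

The same embeddings feed directly into the clustering use case, e.g. by passing `corpus_embeddings` to scikit-learn's `KMeans`.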
Exploratory Questions:
- How does fine-tuning affect embedding quality?
- When should I use Sentence-BERT over plain BERT?
- How do pretrained models like all-MiniLM and mpnet compare? (see the sketch below)
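
A starting point for the last question: a sketch that scores the same sentence pair with both model families, assuming the commonly published checkpoint names `all-MiniLM-L6-v2` and `all-mpnet-base-v2`. The example pair is made up; swap in pairs from your own domain to make the comparison meaningful:

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical sentence pair for comparing model behavior
pair = ("How do I reset my password?", "Steps to recover account access")

# MiniLM is smaller and faster; mpnet is larger and typically scores
# higher on similarity benchmarks
for name in ("all-MiniLM-L6-v2", "all-mpnet-base-v2"):
    model = SentenceTransformer(name)
    a, b = model.encode(pair, convert_to_tensor=True)
    print(f"{name}: cos_sim = {util.cos_sim(a, b).item():.4f}")
```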