Apr 20
Characterising LLM-Generated Competency Questions: a Cross-Domain Empirical Study using Open and Closed Models
significance 2/5
This research paper investigates the quality and characteristics of Competency Questions (CQs) generated by a range of large language models. The study compares open and closed models across multiple domains, measuring the readability, relevance, and structural complexity of the generated questions.
Why it matters
Understanding the structural reliability of LLM-generated queries is critical for developing automated, high-fidelity evaluation frameworks for specialized domain knowledge.
Tags
#llm #ontology-engineering #competency-questions #empirical-study
Related coverage
- Global South Opportunities: Pivotal Research Fellowship 2026 (Q3): AI Safety Research Opportunity
- arXiv cs.AI: An Intelligent Fault Diagnosis Method for General Aviation Aircraft Based on Multi-Fidelity Digital Twin and FMEA Knowledge Enhancement
- arXiv cs.AI: PExA: Parallel Exploration Agent for Complex Text-to-SQL
- arXiv cs.AI: The Power of Power Law: Asymmetry Enables Compositional Reasoning
- arXiv cs.AI: On the Existence of an Inverse Solution for Preference-Based Reductions in Argumentation