Artificial intelligence applications rely on knowledge about a relevant real-world domain that is encoded in a knowledge base (KB) using some logical knowledge representation language. The most essential benefit of logical KBs is the ability to perform automatic reasoning in order to derive implicit knowledge or to answer complex queries about the modeled domain. Meaningful reasoning, however, requires the KB to meet minimal quality criteria such as logical consistency. Without adequate tool assistance, resolving violations of such quality criteria can be extremely challenging even for domain experts, especially when the problematic KB comprises a large number of logical formulas, involves expressive logical formalisms, was developed by multiple people or in a distributed fashion, or was (partially) generated by (error-prone) automatic systems. Non-interactive debugging systems published in the research literature often cannot localize all possible faults (incompleteness), suggest the deletion or modification of unnecessarily large parts of the KB (non-minimality), return incorrect solutions that lead to a repaired KB violating the imposed quality requirements (unsoundness), or suffer from poor scalability due to the inherent complexity of the KB debugging problem. Even if a system is complete and sound and considers only minimal solutions, there are in general exponentially many solution candidates to choose from. Moreover, any two repaired KBs obtained from these candidates differ in their semantics, i.e., in their entailments and non-entailments. Selecting an arbitrary one of these repaired KBs might therefore result in unexpected entailments, the loss of desired entailments, or unwanted changes to the KB, which in turn might cause new faults during the further development or application of the repaired KB.
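To make the notions of inconsistency, minimal solutions, and their differing semantics concrete, the following is a minimal toy sketch (not the algorithms developed in this work): a small propositional KB over two hypothetical atoms a and b, where a "diagnosis" is a subset-minimal set of formulas whose removal restores consistency. The brute-force enumeration below is for illustration only and does not scale; the names `KB`, `consistent`, and `minimal_diagnoses` are invented for this example.

```python
from itertools import combinations, product

# Toy KB over atoms a, b; each formula maps a truth assignment to a bool.
# The KB {a, a -> b, not b} is inconsistent: no assignment satisfies all three.
KB = {
    "a":       lambda a, b: a,
    "a -> b":  lambda a, b: (not a) or b,
    "not b":   lambda a, b: not b,
}

def consistent(formulas):
    """A formula set is consistent iff some truth assignment satisfies all formulas."""
    return any(all(f(a, b) for f in formulas.values())
               for a, b in product([False, True], repeat=2))

def minimal_diagnoses(kb):
    """Enumerate subset-minimal sets of formulas whose removal restores consistency."""
    diagnoses = []
    for size in range(len(kb) + 1):
        for removed in combinations(kb, size):
            rest = {k: v for k, v in kb.items() if k not in removed}
            # Keep only subset-minimal candidates.
            if consistent(rest) and not any(set(d) <= set(removed) for d in diagnoses):
                diagnoses.append(removed)
    return diagnoses

print(minimal_diagnoses(KB))  # three minimal diagnoses: delete "a", "a -> b", or "not b"
```

Each of the three diagnoses yields a consistent repaired KB, yet the three repairs disagree semantically: only the repair that deletes "not b" entails b. This is exactly why picking an arbitrary minimal solution is risky.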
Moreover, manual inspection of a large set of solution candidates can be time-consuming (if not practically infeasible), tedious, and error-prone, since humans are normally unable to fully grasp the semantic consequences of deleting or modifying a set of formulas in a KB. This work proposes complete and sound methods for the interactive debugging of KBs that suggest the one (minimally invasive) error correction of the problematic KB that yields a repaired KB with exactly the intended semantics. Users, e.g., domain experts, are involved in the debugging process by answering automatically generated queries about the intended domain. To the best of our knowledge, this is the first work that formalizes the (interactive) KB debugging problem and details an entire system of provably correct algorithms for the query-based interactive debugging of monotonic KBs. Moreover, two new algorithms for the iterative computation of candidate solutions are introduced, formally proven correct, and critically discussed in depth: one guarantees a monotonic reduction of the number of remaining solutions after the incorporation of each newly answered query; the other features more powerful search-space pruning techniques than existing methods. Additionally, we propose and thoroughly analyze different approaches for selecting optimal queries, present learning strategies that often reduce the interacting user's workload drastically and almost always outperform non-learning methods, and elaborate on mechanisms for efficiently handling KB debugging problems involving high-cardinality faults.
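The core idea of query-based debugging can be sketched on the same kind of toy propositional KB as above (again, this is an illustrative simplification, not the query-selection machinery of this work): each candidate repair induces different entailments, so asking the user whether a formula should be entailed by the intended KB discards every candidate that disagrees with the answer. All names below (`candidates`, `entails`) are invented for the example.

```python
from itertools import product

# Each candidate repair of the inconsistent toy KB {a, a -> b, not b}
# is the formula set left after deleting one fault candidate.
candidates = {
    "delete 'a'":      [lambda a, b: (not a) or b, lambda a, b: not b],
    "delete 'a -> b'": [lambda a, b: a, lambda a, b: not b],
    "delete 'not b'":  [lambda a, b: a, lambda a, b: (not a) or b],
}

def entails(formulas, atom_index):
    """A formula set entails an atom iff the atom is true in all of its models."""
    models = [m for m in product([False, True], repeat=2)
              if all(f(*m) for f in formulas)]
    return bool(models) and all(m[atom_index] for m in models)

# Query: "should b (atom index 1) be entailed?" -- suppose the expert answers yes.
answer = True
remaining = {name: fs for name, fs in candidates.items()
             if entails(fs, 1) == answer}
print(list(remaining))  # a single candidate survives: "delete 'not b'"
```

One answered query here shrinks the candidate set from three to one; in general, a well-chosen query eliminates a large fraction of the remaining candidates regardless of how the user answers, which is the intuition behind optimal query selection.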