Förderjahr 2021 / Stipendien Call #16 / ProjektID: 5761 / Projekt: Automated Software Verification with First-Order Theorem Provers
QuickSort is a well-known example for making the case for recursion. QuickSort relies on a divide-and-conquer strategy to sort a list of items efficiently, and recursion is a natural way to express this strategy.
Specifically, QuickSort's primary strategy is to divide a large sorting problem into smaller subproblems, sort those subproblems, and then combine the results to obtain the sorted list. Recursion is a powerful technique for solving problems that can be broken down into smaller, similar instances of the same problem. In each recursive step, a pivot element is chosen and the list is partitioned into two sublists: elements smaller than (or equal to) the pivot and elements larger than the pivot. The recursive calls then sort these sublists, so recursion handles the two subproblems naturally.
Let’s look at a functional version of QuickSort written in Haskell syntax, where the pivot, for simplicity, is the first element of the list:
Pseudocode (Haskell Syntax)
quickSort :: (Ord a) => [a] -> [a]
quickSort []     = []  -- Base case: an empty list is already sorted
quickSort (x:xs) =
  let smaller = filter (<= x) xs  -- Elements smaller than or equal to the pivot
      larger  = filter (> x) xs   -- Elements larger than the pivot
  in  quickSort smaller ++ [x] ++ quickSort larger

-- Example usage:
unsortedList :: [Int]  -- You can change 'Int' to any desired type 'a'
unsortedList = [5, 2, 9, 3, 6]

sortedList :: [Int]
sortedList = quickSort unsortedList
How this code works:
- The quickSort function is defined with a type constraint (Ord a), which means it works for any type 'a' that has an Ord instance, i.e., types whose values can be compared with a total order (using <, >, <=, etc.).
- The base case is when the list is empty, in which case it returns an empty list since it's already sorted.
- For non-empty lists, it selects the first element 'x' as the pivot, filters elements into smaller and larger lists using the filter function, and recursively applies quickSort to those sublists.
- The sorted sublists are concatenated with the pivot element between them to produce the final sorted list.
You can use this Haskell code with lists of elements of any type that has a total order defined, i.e., an Ord instance. Just replace 'a' with the desired type, e.g. Int, when declaring the list.
What makes the functional version “nicer”?
Iterative and recursive implementations of the QuickSort algorithm both aim to sort a list of elements, but they use different control flow structures to achieve this goal. In iterative QuickSort, the sorting algorithm is implemented using loops and an explicit stack (or queue) data structure. Instead of making recursive function calls, the algorithm uses an iterative structure to manage the partitioning and sorting process. The explicit stack mimics the function call stack that recursion uses implicitly: it keeps track of subproblems, the state of partitions, and their corresponding indices.
In terms of space efficiency, iterative versions can be more memory-efficient because they avoid the overhead of recursive function calls. However, their implementation is often more complex to write and understand due to the need for managing the explicit data structure and state transitions.
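To make the contrast concrete, here is a minimal sketch of an explicit-stack QuickSort, written in Haskell to match the code above. This is our own illustration, not part of the original post, and only one of several possible designs: a single worklist loop replaces the two recursive calls, which is essentially what an imperative iterative implementation does with a loop and a stack. (The loop itself is written as tail recursion, Haskell's idiom for iteration.)

-- Each stack entry is either a pivot already in its final relative
-- position (Done) or a sublist still waiting to be partitioned (Sort).
data Task a = Done a | Sort [a]

quickSortIter :: Ord a => [a] -> [a]
quickSortIter input = go [Sort input] []
  where
    -- 'go' pops one task per step off the explicit stack. Pivots are
    -- emitted onto the accumulator largest-first, so pushing the larger
    -- sublist on top makes the accumulator come out in ascending order.
    go []                 acc = acc
    go (Done x      : st) acc = go st (x : acc)
    go (Sort []     : st) acc = go st acc
    go (Sort (x:xs) : st) acc =
      let smaller = filter (<= x) xs
          larger  = filter (> x) xs
      in  go (Sort larger : Done x : Sort smaller : st) acc

Note how the stack entries play exactly the role of pending recursive calls: instead of the runtime remembering where to resume, the program records the remaining subproblems itself.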
In recursive QuickSort, the sorting algorithm uses recursive function calls to solve the problem. It divides the list into smaller sublists and sorts them through function calls. The recursion naturally handles partitioning and sorting of sublists as each recursive call works on a smaller part of the original list. Recursive calls implicitly maintain the state of the partitions. Recursive QuickSort can be more straightforward to implement and understand due to its natural expression of the divide-and-conquer approach.
In terms of sorting performance, both iterative and recursive QuickSort algorithms have the same time complexity: O(n·log(n)) on average, and O(n²) in the worst case (e.g. an already-sorted list when the first element is chosen as pivot). The choice between them often depends on factors like language or programming style preferences, memory constraints, and the specifics of the problem you are trying to solve. If memory is a concern, an iterative approach might be preferred. If simplicity and clarity are more important, the recursive approach is often favored.
What does it mean for QuickSort to be correct?
- Sortedness: first, we need to make sure that the output of the computation is sorted. For an integer-based list, this means proving that each element is smaller than or equal to the next element.
- Permutation equivalence: secondly, we need to establish that the algorithm doesn’t add or drop any elements of the input list. This means we need to check that the output of the computation is a permutation of the original input list.
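The two properties above can be stated as executable Haskell predicates. The sketch below is illustrative (it assumes the standard Data.List module) and is not the formal first-order encoding a theorem prover would work with:

import Data.List (sort)

-- Sortedness: every element is <= its successor.
isSorted :: Ord a => [a] -> Bool
isSorted xs = and (zipWith (<=) xs (drop 1 xs))

-- Permutation equivalence: same elements with the same multiplicities.
-- Comparing canonically sorted copies is a simple way to check this.
isPermutationOf :: Ord a => [a] -> [a] -> Bool
isPermutationOf xs ys = sort xs == sort ys

-- QuickSort is correct on an input 'xs' if both properties hold:
--   isSorted (quickSort xs) && (quickSort xs `isPermutationOf` xs)

A theorem prover must establish these properties for all inputs, not just test them on examples, which is what makes verifying recursive programs challenging.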
In our case, we will investigate how we can prove the recursive version of QuickSort correct for lists over a type parameter 'a' using automated first-order theorem proving. This is a powerful first step towards establishing the correctness of software that uses recursion as a basis for computation.
Forschungsinteressen: Automated Software Verification, Program Analysis, Automated Deduction, Theorem Proving, Formal Methods.