
7 Logical depth and automatic complexity

7.1 Introduction

Logical depth was defined by Chaitin in 1977 [1] and further studied by Bennett in 1986. It is essentially the time it takes to verify a witness of Kolmogorov complexity. We demonstrate that for automatic complexity, logical depth arises as the difficulty of determining the unique solvability of certain knapsack problems.
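
To fix intuition for the kind of uniqueness question involved, here is a brute-force Python sketch that decides whether a subset-sum (knapsack) instance has exactly one solution. The helper `unique_subset_sum` and the toy instances are hypothetical illustrations, not routines used later in the chapter.

```python
from itertools import combinations

def unique_subset_sum(weights, target):
    """Return True iff exactly one subset of `weights` sums to `target`."""
    solutions = 0
    for r in range(len(weights) + 1):
        for indices in combinations(range(len(weights)), r):
            if sum(weights[i] for i in indices) == target:
                solutions += 1
                if solutions > 1:
                    return False  # more than one solution: not uniquely solvable
    return solutions == 1

print(unique_subset_sum([1, 2, 4], 5))  # True: only {1, 4} works
print(unique_subset_sum([1, 2, 3], 3))  # False: {3} and {1, 2} both work
```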

A word is deep not so much if it has no simple description, but rather if its simplest description is hard to understand and verify. For example, the factorization

\[ 2001 = 3 \times 23 \times 29 \]

is laborious to find, but relatively simple to verify. In that sense, it is not deep. In this chapter we uncover a quite concrete instance of this idea of logical depth. To do so we replace Turing machines by finite automata. In the case of infinite sequences, finite-state depth has been studied by Jordon and Moser [5], but we focus on the concrete world of finite words.
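
The asymmetry between finding and verifying can be made concrete in a few lines of Python; this is only an illustrative sketch, with trial division standing in for the laborious search.

```python
def factor(n):
    """Find a prime factorization by trial division: the laborious search."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def verify(n, factors):
    """Checking a claimed factorization is a single multiplication
    (primality of the claimed factors would need a separate check)."""
    product = 1
    for p in factors:
        product *= p
    return product == n

print(factor(2001))               # [3, 23, 29], found by searching
print(verify(2001, [3, 23, 29]))  # True, checked immediately
```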

We replace Kolmogorov complexity with automatic complexity. The automatic complexity \(A(w)\) of a word \(w\) was studied by Shallit and Wang (2001) [15]. It is the minimum number of states of a DFA that accepts \(w\) and no other word of length \(\lvert w\rvert\), and it can be defined in a couple of inequivalent variants.
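
The defining uniqueness condition can be checked mechanically. The Python sketch below (an illustration, not part of the original development) counts the words of a given length accepted by a DFA and uses this to confirm that an assumed two-state machine accepts the word 10 and no other binary word of length two, witnessing \(A(10) \le 2\).

```python
def accepts(delta, start, accepting, w):
    """Run a total DFA, given as a dict delta[(state, symbol)] = state, on w."""
    q = start
    for a in w:
        q = delta[(q, a)]
    return q in accepting

def count_accepted(delta, start, accepting, alphabet, n):
    """Count the words of length n accepted by the DFA, by dynamic programming
    on how many length-i words lead to each state."""
    counts = {start: 1}
    for _ in range(n):
        nxt = {}
        for q, c in counts.items():
            for a in alphabet:
                r = delta[(q, a)]
                nxt[r] = nxt.get(r, 0) + c
        counts = nxt
    return sum(c for q, c in counts.items() if q in accepting)

# An assumed two-state example over {0, 1}: this DFA accepts the word "10"
# and no other binary word of length 2, so A("10") <= 2.
delta = {(0, "0"): 1, (0, "1"): 0, (1, "0"): 0, (1, "1"): 0}
w = "10"
print(accepts(delta, 0, {1}, w))                         # True
print(count_accepted(delta, 0, {1}, "01", len(w)) == 1)  # True: w is unique
```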

Now, an NFA \(M\) with a minimum number of edges accepting the word \(w\) and no other word of the same length must of course contain a walk on which \(w\) is accepted that visits every edge of \(M\). It is therefore natural to require \(w\) to be the only word of length \(\lvert w\rvert\) accepted by \(M\) on an edge-covering walk, which yields the edge-covering complexity \(E_{Nc}(w)\).
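
To illustrate the edge-covering condition, the following brute-force sketch enumerates all words of length \(n\) read along accepting walks that traverse every edge of a given NFA. The three-edge machine at the end is an assumption chosen for illustration: it accepts 011 on an edge-covering walk and no other word of length three in that way, so, with edges counted as the paragraph above suggests, it gives \(E_{Nc}(011) \le 3\).

```python
def edge_cover_words(edges, start, accepting, n):
    """Words of length n spelled by walks that start in `start`, end in
    `accepting`, and traverse every edge of the NFA at least once.
    Edges are triples (p, a, q)."""
    all_edges = frozenset(edges)
    found = set()

    def walk(state, used, word):
        if len(word) == n:
            if state in accepting and used == all_edges:
                found.add("".join(word))
            return
        for e in all_edges:
            p, a, q = e
            if p == state:
                walk(q, used | {e}, word + [a])

    walk(start, frozenset(), [])
    return found

# An assumed two-state, three-edge NFA: the only length-3 word accepted
# along an edge-covering walk is "011".
edges = {(0, "0", 0), (0, "1", 1), (1, "1", 1)}
print(edge_cover_words(edges, start=0, accepting={1}, n=3))  # {'011'}
```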