Farkas-Minkowski-Weyl Lemma for Open Cones¶
Recall that the Farkas-Minkowski-Weyl Lemma states the following:
Theorem
A cone is polyhedral if and only if it is finitely generated.
where
- a polyhedral cone is a cone of the form
  \[ V = \{\mathbf{x} \in \mathbb{Q}^d : M \cdot \mathbf{x} \ge 0\} \]
  in which \(M \in \mathbb{Q}^{m \times d}\) is a rational matrix;
- a finitely generated cone is a cone of the form
  \[ V = \{\mathbf{x} \in \mathbb{Q}^d : \mathbf{x} = R \cdot \mathbf{t} \text{ for some } \mathbf{t} \ge 0 \} \]
  in which the columns of \(R\) are the generators, and the coefficient vector \(\mathbf{t}\) has to be non-negative.
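To make the two representations concrete, here is a small sketch in Python with exact rational arithmetic. The example is illustrative and mine, not from any paper: the cone in \(\mathbb{Q}^2\) generated by \(\mathbf{r}_1 = (1,0)\) and \(\mathbf{r}_2 = (1,1)\), whose polyhedral description \(x_2 \ge 0 \wedge x_1 - x_2 \ge 0\) is worked out by hand.

```python
from fractions import Fraction
from itertools import product

# Generators r1 = (1, 0), r2 = (1, 1) (columns of R), and the half-space
# description M . x >= 0 with rows (0, 1) and (1, -1), found by hand.
R = [(Fraction(1), Fraction(0)), (Fraction(1), Fraction(1))]
M = [(Fraction(0), Fraction(1)), (Fraction(1), Fraction(-1))]

def in_polyhedral(x):
    """Membership in the polyhedral representation: M . x >= 0 componentwise."""
    return all(sum(a * xi for a, xi in zip(row, x)) >= 0 for row in M)

def generated_point(t):
    """The point x = R . t for a coefficient vector t."""
    return tuple(sum(col[i] * ti for col, ti in zip(R, t)) for i in range(2))

# Every non-negative rational combination of the generators satisfies M . x >= 0.
coeffs = [Fraction(k, 3) for k in range(7)]
assert all(in_polyhedral(generated_point(t)) for t in product(coeffs, repeat=2))
```

Conversely, a point violating one of the inequalities, such as \((1, 2)\), lies outside the cone and cannot be written as a non-negative combination of \(\mathbf{r}_1, \mathbf{r}_2\).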
Using this fact, we can argue that any cone (for example, the cone generated by simple cycle effects in a VASS) can be represented as an intersection of finitely many half spaces. Note that a half space is the set given by a normal vector \(\mathbf{a}\):
\[ H_{\mathbf{a}} = \{\mathbf{x} \in \mathbb{Q}^d : \mathbf{a} \cdot \mathbf{x} \ge 0\}. \]
In the paper [CJLO25], the authors studied open cones, i.e. finitely generated cones where every coefficient must be strictly positive:
\[ V^\circ = \{\mathbf{x} \in \mathbb{Q}^d : \mathbf{x} = R \cdot \mathbf{t} \text{ for some } \mathbf{t} > 0 \}. \]
It was claimed that \(V^\circ\) is always an intersection of finitely many open half spaces, but the proof (given only for \(d = 3\)) was vague. Here we treat the claim more rigorously.
Fourier–Motzkin Elimination, Generalized¶
The technique of Fourier–Motzkin elimination is essential in the proof of the Farkas-Minkowski-Weyl Lemma. In essence, it is a quantifier-elimination technique for systems of linear inequalities in which only \(\ge\) and \(\le\) are allowed. Here we generalize it for (but not limited to) the purpose of describing open cones.
Consider a formula given in the following form:
\[ \varphi(R, L, \mathbf{x}) \;:=\; \exists\, t_1, \ldots, t_n.\; \mathbf{x} = R \cdot (t_1 \; \cdots \; t_n)^T \,\wedge\, L(\mathbf{x}, t_1, \ldots, t_n) > 0, \]
where \(R\) is a \(d \times n\) rational matrix containing no zero column, \(L(\cdot)\) is a homogeneous linear mapping, i.e. each of its components is a linear combination of its arguments, and \(\mathbf{x} \in \mathbb{Q}^d\) is a vector.
Remark
Note how formulae of this form can describe open cones. Consider an open cone generated by \(\mathbf{r}_1, \ldots, \mathbf{r}_n\). Then \(\mathbf{x} \in \text{Cone}_{>0}\{\mathbf{r}_1, \ldots, \mathbf{r}_n\}\) can be represented by \(\varphi(R, L, \mathbf{x})\) for \(R\) with columns exactly \(\mathbf{r}_1, \ldots, \mathbf{r}_n\) and \(L(\mathbf{x}, t_1, \ldots, t_n) = (t_1 \; \cdots \; t_n)\) being the projection mapping.
We aim to prove the following elimination lemma.
Lemma
For any matrix \(R\), homogeneous linear mapping \(L\), and vector \(\mathbf{x}\), the formula \(\varphi(R, L, \mathbf{x})\) is equivalent to
\[ T \cdot \mathbf{x} > 0 \]
for some matrix \(T\) (depending only on \(R\) and \(L\)).
Proof. Let \(n\) be the number of variables with non-zero coefficient in the linear mapping \(L\); we prove the lemma by induction on \(n\). The base case \(n = 0\) holds trivially, so we consider the general case. Let \((\mathbf{r}_1 \; \cdots \; \mathbf{r}_n) = R\) be the column vectors of \(R\). Then the condition \(\mathbf{x} = R \cdot (t_1 \; \cdots \; t_n)^T\) is equivalent to \(\mathbf{x} = \mathbf{r}_1 \cdot t_1 + \cdots + \mathbf{r}_n \cdot t_n\). We eliminate the variable \(t_n\). Let \(i \in [d]\) be such that \(\mathbf{r}_n(i) \ne 0\); such an \(i\) exists since \(R\) contains no zero column. From the above equation one can deduce that
\[ t_n = \frac{\mathbf{x}(i) - \mathbf{r}_1(i) \cdot t_1 - \cdots - \mathbf{r}_{n-1}(i) \cdot t_{n-1}}{\mathbf{r}_n(i)} \;=:\; T_n(\mathbf{x}, t_1, \ldots, t_{n-1}). \]
Note that \(T_n\) is a homogeneous linear mapping, so we can replace the second condition \(L(\mathbf{x}, t_1, \ldots, t_n) > 0\) with \(L_{n-1}(\mathbf{x}, t_1, \ldots, t_{n-1}) > 0\), where \(L_{n-1}(\mathbf{x}, t_1, \ldots, t_{n-1}) := L(\mathbf{x}, t_1, \ldots, t_{n-1}, T_n(\mathbf{x}, t_1, \ldots, t_{n-1}))\).
From the above argument, one can verify that \(\varphi(R, L, \mathbf{x})\) is equivalent to \(\varphi(R, L_{n-1}, \mathbf{x})\), in which \(L_{n-1}\) has fewer variables with non-zero coefficients. The lemma now follows from the induction hypothesis. ∎
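The single elimination step can be made concrete in code. The sketch below (Python, exact rationals) implements one substitution of \(T_n\) into \(L\); the representation of \(L\) as a coefficient matrix over \((x_1, \ldots, x_d, t_1, \ldots, t_n)\) is my own choice for illustration, not from the source.

```python
from fractions import Fraction

def eliminate_last(R, L):
    """One elimination step: substitute
        t_n = (x(i) - sum_{j<n} r_j(i) * t_j) / r_n(i)
    into every row of L, where i is a coordinate with r_n(i) != 0.

    R: list of n generator columns, each a list of d Fractions.
    L: rows of coefficients over (x_1, ..., x_d, t_1, ..., t_n).
    Returns the rows of L_{n-1}, over (x_1, ..., x_d, t_1, ..., t_{n-1}).
    """
    d, n = len(R[0]), len(R)
    i = next(k for k in range(d) if R[-1][k] != 0)  # r_n(i) != 0
    rni = R[-1][i]
    out = []
    for row in L:
        c = list(row)
        cn = c.pop()                      # coefficient of t_n
        c[i] += cn / rni                  # t_n contributes x(i) / r_n(i)
        for j in range(n - 1):            # ... minus r_j(i) * t_j / r_n(i)
            c[d + j] -= cn * R[j][i] / rni
        out.append(c)
    return out

# Toy instance: d = n = 2, generators r1 = (1, 0), r2 = (1, 1),
# L the projection onto (t1, t2) as in the remark above.
F = Fraction
R2 = [[F(1), F(0)], [F(1), F(1)]]
L2 = [[F(0), F(0), F(1), F(0)],   # row for t1
      [F(0), F(0), F(0), F(1)]]   # row for t2
L1 = eliminate_last(R2, L2)
# the step picks the first coordinate, so t2 = x1 - t1:
# the rows of L1 read t1 and x1 - t1
```

Note that, as in the proof, the equation \(\mathbf{x} = R \cdot \mathbf{t}\) still has to be carried along; the function only transforms the inequality part \(L > 0\).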
Remark
As a consequence, in the context of [CJLO25], we can derive that the cycle space of a 3-VASS, whose geometric dimension is exactly \(3\), can be written as an intersection of finitely many open half spaces (defined by the rows of \(T\)).
Size amplification¶
Let \(\text{Size}(L)\) be the maximum absolute value in the numerator-denominator representation of the matrix of \(L\), and write \(s = \max(\text{Size}(L), \text{Size}(R))\). From the expression of \(T_n\), every coefficient of \(L_{n-1}\) is obtained from two coefficients of \(L\) and two entries of \(R\) by one multiplication, one division, and one addition, so we can observe that
\[ \text{Size}(L_{n-1}) \le 2 s^4. \]
After \(n\) iterations of elimination, the size of the final linear mapping \(T\) in the lemma is bounded by
\[ \text{Size}(T) \le s^{2^{O(n)}}. \]
I.e., the size amplification could be doubly exponential.
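To see how such a bound compounds, here is a tiny arithmetic sketch. The per-step rule \(s \mapsto 2s^4\) is only an assumed illustrative bound (any polynomial per-step bound of degree \(\ge 2\) behaves the same way); iterating it \(n\) times gives the closed form \(2^{(4^n - 1)/3} \cdot s^{4^n}\), which is doubly exponential in \(n\).

```python
# Assumed per-step bound s -> 2 * s**4 (illustrative only); iterating it
# n times matches the closed form 2**((4**n - 1) // 3) * s**(4**n),
# i.e. the size grows doubly exponentially in the number of steps.
def size_after(n, s0):
    s = s0
    for _ in range(n):
        s = 2 * s ** 4
    return s

for n in range(4):
    assert size_after(n, 3) == 2 ** ((4 ** n - 1) // 3) * 3 ** (4 ** n)
```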
Question
How can we recover the size bound \(\text{Size}(L)^2\) for \(d = 3\), as claimed in [CJLO25]?