
Farkas-Minkowski-Weyl Lemma for Open Cones

Recall that the Farkas-Minkowski-Weyl Lemma states the following.

Theorem

A cone is polyhedral if and only if it is finitely generated.

where

  • a polyhedral cone is a cone given by

    \[ V = \{\mathbf{x} \in \mathbb{Q}^d : M \cdot \mathbf{x} \ge 0\} \]

    in which \(M \in \mathbb{Q}^{m \times d}\) is a rational matrix.

  • a finitely generated cone is given by

    \[ V = \{\mathbf{x} \in \mathbb{Q}^d : \mathbf{x} = R \cdot \mathbf{t} \text{ for some } \mathbf{t} \ge 0 \} \]

    in which the columns of \(R\) are the generators, and the coefficients have to be non-negative.

Using this fact, we can argue that any finitely generated cone (for example, the cone generated by simple cycle effects in a VASS) can be represented as an intersection of finitely many half spaces. Recall that a half space is a set given by a normal vector \(\mathbf{a}\):

\[ H = \{ \mathbf{x} \in \mathbb{Q}^d : \mathbf{a} \cdot \mathbf{x} \ge 0 \} \]
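To make the equivalence concrete, here is a small sanity check in Python (a toy instance, not taken from the paper): the cone generated by \(\mathbf{r}_1 = (1, 0)\) and \(\mathbf{r}_2 = (1, 1)\) in \(\mathbb{Q}^2\) coincides with the polyhedral cone cut out by the normals \((1, -1)\) and \((0, 1)\).

```python
from fractions import Fraction

# Toy instance of the lemma: the cone generated by r1 = (1, 0), r2 = (1, 1)
# equals {x : x1 - x2 >= 0, x2 >= 0}.  Rows of M are half-space normals.
M = [(1, -1), (0, 1)]

def in_polyhedral(x):
    """Membership via the half-space description M . x >= 0."""
    return all(a * x[0] + b * x[1] >= 0 for a, b in M)

def in_generated(x):
    """Membership via generators: solve x = t1*r1 + t2*r2 and check t >= 0.
    For these particular generators the system is triangular:
    t2 = x2 and t1 = x1 - x2."""
    t2 = Fraction(x[1])
    t1 = Fraction(x[0]) - t2
    return t1 >= 0 and t2 >= 0

samples = [(2, 1), (1, 1), (0, 0), (1, 2), (-1, 0)]
results = [(in_polyhedral(x), in_generated(x)) for x in samples]
```

Both membership tests agree on every sample point, as the lemma predicts.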

In the paper [CJLO25], the authors studied open cones, i.e. finitely generated cones where every coefficient must be positive:

\[ V^\circ = \{\mathbf{x} \in \mathbb{Q}^d : \mathbf{x} = R \cdot \mathbf{t} \text{ for some } \mathbf{t} > 0 \} \]

It was claimed that \(V^\circ\) is always an intersection of finitely many open half spaces, but the proof (given only for \(d = 3\)) was vague. Here we treat it with more rigor.

Fourier–Motzkin Elimination, Generalized

The technique of Fourier–Motzkin elimination is essential in the proof of the Farkas-Minkowski-Weyl Lemma. In principle it is a technique to eliminate quantifiers from systems of linear inequalities involving only \(\ge\) and \(\le\). Here we generalize it for, but not limited to, the purpose of describing open cones.
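As a warm-up, one elimination step of the classical (non-strict) procedure can be sketched as follows; `fm_eliminate` is a hypothetical helper operating on homogeneous inequalities \(\mathbf{a} \cdot \mathbf{y} \ge 0\), each encoded as its row of coefficients.

```python
def fm_eliminate(rows, j):
    """One step of classical Fourier-Motzkin elimination on the homogeneous
    system {a . y >= 0 : a in rows}: remove variable y_j by combining every
    row with a positive entry in column j with every row with a negative
    entry there."""
    pos = [r for r in rows if r[j] > 0]
    neg = [r for r in rows if r[j] < 0]
    out = [r for r in rows if r[j] == 0]
    for p in pos:
        for q in neg:
            # p[j]*q + (-q[j])*p is a non-negative combination whose
            # j-th entry cancels, so the result no longer mentions y_j
            out.append([p[j] * qv - q[j] * pv for pv, qv in zip(p, q)])
    return out

# Eliminating y2 from {y2 - y1 >= 0, y3 - y2 >= 0} yields y3 - y1 >= 0
reduced = fm_eliminate([[-1, 1, 0], [0, -1, 1]], 1)
```

The same combining step remains sound for strict inequalities, since a positive combination of strictly positive quantities is strictly positive.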

Consider a formula given in the following form:

\[ \varphi(R, L, \mathbf{x}) := \exists t_1, \ldots, \exists t_n.~ \mathbf{x} = R \cdot (t_1 \; \cdots \; t_n)^T \land L(\mathbf{x}, t_1, \ldots, t_n) > \mathbf{0} \]

where \(R\) is a \(d \times n\) rational matrix containing no zero column, \(L(\cdot)\) is a homogeneous linear mapping, i.e. a linear combination of its arguments, and \(\mathbf{x} \in \mathbb{Q}^d\) is a vector.

Remark

Note how formulae of this form can describe open cones. Consider an open cone generated by \(\mathbf{r}_1, \ldots, \mathbf{r}_n\). Then \(\mathbf{x} \in \text{Cone}_{>0}\{\mathbf{r}_1, \ldots, \mathbf{r}_n\}\) can be represented by \(\varphi(R, L, \mathbf{x})\) for \(R\) with columns exactly \(\mathbf{r}_1, \ldots, \mathbf{r}_n\) and \(L(\mathbf{x}, t_1, \ldots, t_n) = (t_1 \; \cdots \; t_n)\) being the projection mapping.
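As an illustration of this encoding (with hypothetical toy generators, not from the paper), the matrix of the projection mapping \(L\) over the stacked vector \((\mathbf{x}; \mathbf{t})\) is simply \((0 \mid I)\):

```python
import numpy as np

# Toy generators (not from the paper): the columns of R
R = np.array([[1, 1],
              [0, 1]])
d, n = R.shape

# Matrix of L over the stacked vector (x; t): L(x, t) = t, i.e. (0 | I)
M = np.hstack([np.zeros((n, d), dtype=int), np.eye(n, dtype=int)])

# phi(R, L, x) holds for this witness: x = R @ t and M @ (x; t) = t > 0
x, t = np.array([5, 2]), np.array([3, 2])
ok = np.array_equal(R @ t, x) and (M @ np.concatenate([x, t]) > 0).all()
```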

We aim at proving the following elimination lemma.

Lemma

For any matrix \(R\), homogeneous linear mapping \(L\), and any vector \(\mathbf{x}\), the formula \(\varphi(R, L, \mathbf{x})\) is equivalent to

\[ \mathbf{x} \in \text{ColSpan}(R) \land T\mathbf{x} > \mathbf{0} \]

for some matrix \(T\).

Proof. Let \(n\) be the number of variables with non-zero coefficient in the linear mapping \(L\); we prove the lemma by induction on \(n\). The base case \(n = 0\) holds trivially, so consider the general case. Let \((\mathbf{r}_1 \; \cdots \; \mathbf{r}_n) = R\) be the column vectors of \(R\). Then the condition \(\mathbf{x} = R\cdot (t_1 \; \cdots \; t_n)^T\) is equivalent to \(\mathbf{x} = \mathbf{r}_1 \cdot t_1 + \cdots + \mathbf{r}_n\cdot t_n\). We will eliminate the variable \(t_n\). Since \(R\) contains no zero column, there exists \(i \in [d]\) such that \(\mathbf{r}_n(i) \ne 0\). Comparing the \(i\)-th coordinates of the above equation, one can deduce that

\[ t_n = \frac{1}{ \mathbf r_n(i)} \bigl[ \mathbf x(i) - \bigl( \mathbf{r}_1(i) \cdot t_1 + \cdots + \mathbf{r}_{n-1}(i) \cdot t_{n-1} \bigr) \bigr] =: T_n(\mathbf{x}, t_1, \ldots, t_{n-1}) \]

Note that \(T_n\) is again a homogeneous linear mapping, so we can substitute it for \(t_n\) in the second condition, replacing \(L(\mathbf{x}, t_1, \ldots, t_n)\) by \(L(\mathbf{x}, t_1, \ldots, t_{n-1}, T_n(\mathbf{x}, t_1, \ldots, t_{n-1})) =: L_{n-1}(\mathbf{x}, t_1, \ldots, t_{n-1})\).

From the above argument, one can verify that \(\varphi(R, L, \mathbf{x})\) is equivalent to \(\varphi(R, L_{n-1}, \mathbf{x})\), and the lemma follows from the inductive hypothesis. (One subtlety: the subsequent substitutions made when applying the inductive hypothesis are again derived from \(\mathbf{x} = R \cdot \mathbf{t}\) and could mention \(t_n\); the pivot coordinates should be chosen as in Gaussian elimination so that already eliminated variables are not reintroduced.)
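As a sanity check of the lemma, the following sketch computes \(T\) under the simplifying assumption that \(R\) has full column rank: then on \(\text{ColSpan}(R)\) the coefficients are uniquely determined, \(\mathbf{t} = S \cdot \mathbf{x}\) for a left inverse \(S\) of \(R\), and all substitutions \(T_n, T_{n-1}, \ldots\) can be read off from \(S\) at once. The helper names are hypothetical.

```python
from fractions import Fraction

def left_inverse(R):
    """Row-reduce [R | I_d] to solve R @ t = x as t = S @ x.
    Assumes R (d x n) has full column rank; S is valid on ColSpan(R)."""
    d, n = len(R), len(R[0])
    A = [[Fraction(R[r][c]) for c in range(n)]
         + [Fraction(int(r == j)) for j in range(d)] for r in range(d)]
    for col in range(n):                       # one pivot per column
        piv = next(r for r in range(col, d) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]
        for r in range(d):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[r][n:] for r in range(n)]        # row k gives t_k = S[k] @ x

def fold(R, M):
    """Compute T with phi(R, L, x)  iff  x in ColSpan(R) and T @ x > 0,
    where M (m x (d+n)) is the matrix of L over the stacked vector (x; t)."""
    d, n = len(R), len(R[0])
    S = left_inverse(R)
    return [[Fraction(row[c]) + sum(Fraction(row[d + k]) * S[k][c]
                                    for k in range(n))
             for c in range(d)] for row in M]

# Example: open cone generated by (1, 0) and (1, 1), L the projection onto t
R = [[1, 1], [0, 1]]
M = [[0, 0, 1, 0], [0, 0, 0, 1]]
T = fold(R, M)   # rows of T are the normals of the open half spaces
```

In the example, the rows of \(T\) say \(x_1 - x_2 > 0\) and \(x_2 > 0\), which is exactly the open cone generated by \((1, 0)\) and \((1, 1)\).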

Remark

As a consequence, in the context of [CJLO25], we can derive that the cycle space of a 3-VASS, whose geometric dimension is exactly \(3\), can be written as an intersection of finitely many open half spaces (defined by the rows of \(T\)).

Size amplification

Let \(\text{Size}(L)\) be the maximum absolute value among the numerators and denominators of the entries of the matrix of \(L\), written in lowest terms. From the expression of \(T_n\) we can observe that

\[ \text{Size}(L_{n-1}) = O(\text{Size}(L)^2) \]
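For concreteness, \(\text{Size}\) can be computed as follows (a hypothetical helper; `fractions.Fraction` keeps entries in lowest terms automatically):

```python
from fractions import Fraction

def size(M):
    """Size of a rational matrix: the maximum absolute value among all
    numerators and denominators of its entries, in lowest terms."""
    return max(max(abs(Fraction(v).numerator), Fraction(v).denominator)
               for row in M for v in row)

s = size([[Fraction(3, 7), Fraction(-9, 2)], [Fraction(5), Fraction(0)]])
```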

After \(n\) iterations of elimination, the size of the final linear mapping \(T\) in the lemma is bounded by

\[ \text{Size}(T) = \text{Size}(L)^{ 2^{O(n)} } \]

I.e., the size amplification could be doubly exponential.

Question

How can we recover the size bound \(\text{Size}(L)^2\) for \(d = 3\) as claimed in [CJLO25]?


  1. [CJLO25]: W. Czerwiński, I. Jecker, S. Lasota, and Ł. Orlikowski, “Reachability in 3-VASS is Elementary,” Apr. 28, 2025, arXiv:2502.13916. doi: 10.48550/arXiv.2502.13916.