
Thinking Summary

The user is asking about the neuron wiggle technique in the paper, specifically how they can wiggle a target neuron without knowing the weights in advance. Let me trace through the paper to understand this.

The key insight is that the attack follows a layer-by-layer approach where they recover signatures first, then signs. By the time they use the neuron wiggle technique, they already have:

1. The signatures of all neurons (the relative ratios of weights)
2. The architecture of the network
3. The fully recovered previous layers (which they've "peeled off")

So when they're working on layer i, they have:

- $\hat{A}^{(i)}$ - the recovered weight matrix up to sign (they know signatures, which are ratios of weights)
- $F_{i-1}$ - the fully recovered previous layers

The key is that they can compute a wiggle that maximizes the change for a target neuron WITHOUT knowing the sign, because they know the relative magnitudes of weights (the signature), and maximizing the size of a wiggle doesn't require knowing the sign: if you negate the expression you get a wiggle of the same size in the opposite direction.

Let me find the specific part where they explain this... In Section 4.2, they explain: "The crucial property we use here is that maximizing the size of the wiggle produced by a linear expression does not require knowledge of its sign - if we negate the expression we get a wiggle of the same size but in the opposite direction."

And the process is:

1. They have $\hat{A}_j^{(i)}$ - the j-th row of the recovered weight matrix (up to sign)
2. They create a wiggle $\delta$ parallel to $\hat{A}_j^{(i)}$
3. They project this onto the space of control $V^{(i-1)}$
4. They find the input difference $\Delta$ that generates this wiggle by finding a pre-image under $F^{(i-1)}$

I can leverage the signature's directional information to create a maximal wiggle, exploiting the symmetry of magnitude regardless of sign direction. This allows precise neuron manipulation without complete weight knowledge.

The neuron wiggle technique can identify inputs that maximally wiggle a target neuron because, by this stage of the attack, the adversary already knows the signatures (relative weight ratios) of all neurons, even though they don't yet know the signs.

Here's how it works:

Key Prerequisites

Before using neuron wiggle, the attack has already:

  1. Recovered the signatures of all neurons using critical points (the ratios like $(1, \frac{a_2}{a_1}, \ldots, \frac{a_\ell}{a_1})$)
  2. Fully recovered all previous layers (which have been "peeled off")
  3. Obtained $\hat{A}^{(i)}$, the weight matrix for layer $i$ up to unknown signs

Creating the Wiggle

The crucial insight is that maximizing the size of a wiggle doesn't require knowing the sign. If you have weights $(a_1, a_2, \ldots, a_n)$ up to sign, you can create a wiggle $\delta$ parallel to these weights:

$\delta \parallel \hat{A}_j^{(i)} = (\pm|a_1|, \pm|a_2|, \ldots, \pm|a_n|)$

This wiggle maximizes $|\langle \hat{A}_j^{(i)}, \delta \rangle|$ regardless of whether the true weights are positive or negative, because:

  • If the true weights are $(a_1, a_2, \ldots)$, the dot product gives $|a_1|^2 + |a_2|^2 + \ldots$
  • If the true weights are $(-a_1, -a_2, \ldots)$, you still get the same magnitude
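
To make the symmetry concrete, here is a minimal numpy sketch; `a_true` is a hypothetical stand-in for one true weight row, and the attacker's recovered signature is correct only up to a global sign:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = rng.normal(size=8)  # true (unknown) weights of the target neuron

# The attacker knows the signature, i.e. the weight vector up to a global sign.
for a_hat in (a_true, -a_true):
    # Wiggle chosen parallel to the recovered signature, scaled to a small step.
    delta = 1e-3 * a_hat / np.linalg.norm(a_hat)
    # The induced pre-activation change has the same (maximal) magnitude
    # whichever global sign the attacker happened to recover.
    print(abs(a_true @ delta))  # identical both times
```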

The Process

  1. Project onto space of control: They project $\hat{A}_j^{(i)}$ onto $V^{(i-1)}$ (the subspace they can actually control at layer $i$)

  2. Find input difference: They compute $\Delta \in \mathbb{R}^{d_0}$ by finding a pre-image of $\delta$ under $F^{(i-1)}$ (the fully recovered previous layers)

  3. Test at critical point: At a critical point $x^*$ for the target neuron, they evaluate:

    • $L = f(x^* - \Delta) - f(x^*)$
    • $R = f(x^* + \Delta) - f(x^*)$
  4. Determine sign: If $|L| > |R|$, the neuron has negative sign; if $|R| > |L|$, it has positive sign

The key is that they can create an input perturbation that maximally affects the target neuron using only the signature information, then use the asymmetric behavior around critical points to determine which sign is correct.
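
A toy numpy sketch of steps 3 and 4, assuming a hypothetical network with a single ReLU neuron (in the paper's setting the "off" side of the wiggle still changes the output through the other neurons, which is why the test compares the two magnitudes instead of checking for zero):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
a_true = rng.normal(size=d)  # hypothetical true weights, sign unknown to the attacker
b = rng.normal()
w_out = rng.normal()

def f(x):
    # toy one-neuron ReLU network standing in for the full model
    return w_out * max(a_true @ x + b, 0.0)

# slide an arbitrary point onto the critical hyperplane a.x + b = 0
x0 = rng.normal(size=d)
x_star = x0 - (a_true @ x0 + b) / (a_true @ a_true) * a_true

a_hat = -a_true  # recovered signature, here with the wrong global sign
delta = 1e-3 * a_hat / np.linalg.norm(a_hat)

L = f(x_star - delta) - f(x_star)
R = f(x_star + delta) - f(x_star)
print("negative sign" if abs(L) > abs(R) else "positive sign")  # -> negative sign
```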

Thinking Summary

The user is asking which section of the paper describes the signature recovery process. Let me scan through the paper to find where this is discussed. Looking at the paper structure, I can see that Section 3.3 is titled "Carlini et al.'s Differential Attack" and it contains subsections on "Finding critical points" and "Finding signatures". This appears to be where the signature recovery is explained. Let me confirm by looking at the content in that section... Yes, in Section 3.3, under the "Finding signatures" subsection, the paper explains how signatures are recovered. It starts with "The input to the DNN is also the input to the first hidden layer, and we have full control over it..." and goes on to explain the process.

The signature recovery is described in Section 3.3 ("Carlini et al.'s Differential Attack"), specifically in the subsection titled "Finding signatures".

This section explains how to recover the signatures (the relative weight ratios) of neurons by:

  1. For the first hidden layer: Using critical points and computing partial derivatives (see the sketch after this list):

    • Query $\alpha_{i,-} = \frac{\partial f}{\partial e_i}(x^* - \varepsilon e_i)$ and $\alpha_{i,+} = \frac{\partial f}{\partial e_i}(x^* + \varepsilon e_i)$
    • The difference $\alpha_{i,+} - \alpha_{i,-}$ gives a multiple of weight $a_i$
    • Taking ratios $\frac{\alpha_{i,+} - \alpha_{i,-}}{\alpha_{k,+} - \alpha_{k,-}} = \frac{a_i}{a_k}$ reveals the signature
  2. For deeper layers: Since you can't control individual coordinates after layer 1, they:

    • Sample $d_i + 1$ random directions $\delta_k \sim \mathcal{N}(0, \varepsilon I_{d_0})$
    • Compute second derivatives $y_k = \frac{\partial^2 f(x^*)}{\partial \delta_1 \partial \delta_k}$
    • Solve for the signature vector $a$ such that $\langle h_k, a \rangle = y_k$
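
As an illustration of the first-layer case, here is a minimal numpy sketch on a hypothetical one-hidden-layer network; note the caveat in the comments, since querying along each $e_i$ separately only pins down magnitudes:

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 6, 4
A = rng.normal(size=(m, d))  # hypothetical first-layer weights
b = rng.normal(size=m)
w = rng.normal(size=m)

def f(x):
    return w @ np.maximum(A @ x + b, 0.0)

def dfde(x, i, h=1e-6):
    # finite-difference estimate of the partial derivative along e_i
    e = np.zeros(d); e[i] = 1.0
    return (f(x + h * e) - f(x - h * e)) / (2 * h)

# place x_star on neuron 0's critical hyperplane: A[0] @ x_star + b[0] == 0
x0 = rng.normal(size=d)
x_star = x0 - (A[0] @ x0 + b[0]) / (A[0] @ A[0]) * A[0]

eps = 1e-3
diffs = np.empty(d)
for i in range(d):
    e = np.zeros(d); e[i] = 1.0
    diffs[i] = dfde(x_star + eps * e, i) - dfde(x_star - eps * e, i)

# Caveat: stepping along each e_i separately recovers a multiple of |a_i|
# (which side of the hyperplane is active depends on the sign of a_i), so the
# ratios come out as magnitudes; Carlini et al. fix the relative signs with
# extra queries along combined directions such as e_i + e_k.
print(np.allclose(np.abs(diffs / diffs[0]), np.abs(A[0] / A[0, 0]), atol=1e-2))
```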

The paper notes that this signature recovery technique is from Carlini et al.'s prior work, and the new contributions in this paper focus on the sign recovery methods (SOE, Neuron Wiggle, and Last Hidden Layer techniques) described in Section 4.


ah, this is the part i’m concerned with, how do i know i’m in a sufficiently useful local neighborhood to start with? what if there are critical points off in some region of input space i didn’t sample?

replyquoteparent
Thinking Summary

The user is asking about a very good point - how do we ensure we find critical points for all neurons? What if some neurons have critical points only in regions of input space we didn't sample?

Looking at the paper, there are a few relevant points:

1. They mention "unlucky events" that could cause the attack to fail, including neurons whose values "almost never change sign" - these are the "almost always-on/off neurons" mentioned on page 2-3.
2. In their practical experiments, they start from randomly chosen CIFAR10 images to find critical points.
3. They mention that after training, they didn't encounter always-on neurons but did find always-off neurons.
4. The paper notes that if a neuron is almost always on or off, its behavior is "simply absorbed by the linear mappings in the next layer," resulting in a "very close approximation of the original neural network."

But the user is right to be concerned - this is a potential weakness in the attack. If critical points exist only in some exotic region of input space that wasn't sampled, those neurons' signs might not be recoverable.

This is an excellent observation and indeed a fundamental limitation of the attack! The paper acknowledges this concern in several places:

The "Unlucky Events" Problem

The authors explicitly mention this as a caveat on page 2-3:

"there could be some neurons whose values before the ReLU almost never change sign, and thus our bounded number of queries will fail to detect any critical point for them."

They call these "almost always-on/off neurons" and note that:

  • If you can't find critical points for a neuron, you can't recover its sign
  • In practice, after training, they found instances of always-off neurons but not always-on neurons
  • Such neurons' "almost perfectly linear behavior will be simply absorbed by the linear mappings in the next layer"

Sampling Strategy

For finding critical points, they:

  1. Sample pairs of inputs $x_1, x_2$ and trace the line between them
  2. In CIFAR10 experiments, start from randomly chosen dataset images
  3. Look for slope discontinuities (kinks) in the piecewise linear function $f^*(\lambda) = f(x_1 + \lambda(x_2 - x_1))$
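
A minimal numpy sketch of this line search on a hypothetical one-hidden-layer network; a coarse scan plus bisection is wasteful in queries compared to the paper's search, but the geometry is the same:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 6, 4
A = rng.normal(size=(m, d)); b = rng.normal(size=m); w = rng.normal(size=m)

def f(x):
    return w @ np.maximum(A @ x + b, 0.0)

x1, x2 = rng.normal(size=d), rng.normal(size=d)

def g(lam):
    # restriction of f to the line through x1 and x2: piecewise linear in lam
    return f(x1 + lam * (x2 - x1))

# coarse scan for slope changes, then bisect each bracketed kink
lams = np.linspace(0.0, 1.0, 2001)
vals = np.array([g(l) for l in lams])
second_diff = np.abs(np.diff(np.diff(vals)))
kinks = np.nonzero(second_diff > 1e-8)[0]
# adjacent indices flag the same kink; keep only the first of each run
kinks = [k for n, k in enumerate(kinks) if n == 0 or k != kinks[n - 1] + 1]

for k in kinks:
    lo, hi = lams[k], lams[k + 2]
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        # g is linear on [lo, mid] iff the midpoint of that interval lies on
        # the chord; if it doesn't, the kink is in the left half
        if abs(g(0.5 * (lo + mid)) - 0.5 * (g(lo) + g(mid))) > 1e-12:
            hi = mid
        else:
            lo = mid
    print("critical point near lambda =", 0.5 * (lo + hi))

# ground-truth hyperplane crossings along the segment, for comparison
t = -(A @ x1 + b) / (A @ (x2 - x1))
print(sorted(l for l in t if 0.0 < l < 1.0))
```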

Why It Often Works in Practice

The attack succeeds despite this limitation because:

  1. Training creates diverse behaviors: Neural networks trained on real data tend to have neurons that activate across different regions of input space

  2. Many starting points: They use 200 different critical points per neuron for sign recovery, starting from different random images

  3. Functional equivalence is sufficient: If a neuron is always-off, setting its weights to zero yields a functionally equivalent network

However, you're absolutely right that this is a vulnerability. If critical points for certain neurons exist only in unexplored regions (perhaps adversarially crafted inputs or out-of-distribution examples), the attack would fail to recover those neurons' parameters correctly.
