Model reconstruction
Thread (26 posts)
In the paper there’s an intuition-building figure showing a 2D input space and plotting a function along a line interpolating between two input points. The function is real-valued, so it can be plotted in 2D and put in a chart. That chart has sharp corners corresponding to the boundaries between the linear pieces of the activation functions.
But in a real system the input and output are high-dimensional vectors, correct? How does the method in the paper find the corners of a function whose output is high-dimensional?
Thinking Summary
The user is asking about how the method in the paper handles finding critical points when the output is high-dimensional. They're referring to Figure 2 in the paper, which shows a nice 2D visualization of how the function changes along a line between two input points, with visible sharp corners where neurons toggle between active/inactive states. The key insight is that while the output is indeed a high-dimensional vector, each individual output coordinate is still a scalar function that can be analyzed along the line between two input points. The paper doesn't need to find corners in a high-dimensional output space - instead, it finds corners in the scalar functions corresponding to each output coordinate. Let me explain this clearly without using analogies.
You're right that in real neural networks both the input and output are high-dimensional vectors. The key insight is that the method doesn't need to find corners in a high-dimensional output space. Instead, it analyzes each output coordinate separately as a scalar function.
When you interpolate between two input points $x_1$ and $x_2$ in the high-dimensional input space via $x(\lambda) = x_1 + \lambda(x_2 - x_1)$, each output coordinate $f_j(x(\lambda))$ becomes a scalar function of $\lambda$. So if your network has 10 outputs, you get 10 separate piecewise linear functions, each of which can be analyzed for critical points.
The critical points (where ReLUs toggle) affect all output coordinates simultaneously. When a neuron in some hidden layer switches from active to inactive (or vice versa) as you move along the interpolation path, this creates a sharp corner in the piecewise linear function for each output coordinate. The slopes before and after this point will be different for each output, but the location where the corner occurs is the same across all outputs.
The paper's algorithm finds these corners by:
- Measuring slopes at different points along the interpolation
- Finding where slopes change (indicating a corner)
- Mapping back from $\lambda^*$ to the actual input point $x(\lambda^*)$
This approach works because ReLU networks are piecewise linear: the high-dimensional output function is linear within each region where all neurons maintain their active/inactive states, with sharp transitions only at the boundaries where neurons toggle.
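To make this concrete, here is a minimal sketch of the slope-scanning idea. It is not the paper's exact procedure (the real attack localizes each corner far more precisely); the toy network, sampling density, and tolerance below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer ReLU network: 10-D input, 16 hidden units, 3-D output.
W1, b1 = rng.normal(size=(16, 10)), rng.normal(size=16)
W2, b2 = rng.normal(size=(3, 16)), rng.normal(size=3)

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def find_corners(x1, x2, n=10_001, tol=1e-6):
    """Approximate lambda values in [0, 1] where any output's slope changes."""
    lams = np.linspace(0.0, 1.0, n)
    outs = np.stack([f(x1 + lam * (x2 - x1)) for lam in lams])   # shape (n, 3)
    slopes = np.diff(outs, axis=0) / np.diff(lams)[:, None]      # per-interval slopes
    # A corner makes the slope of at least one output coordinate differ
    # between adjacent intervals. (Each corner can flag two adjacent
    # intervals; a real attack would refine each location further.)
    mask = np.any(np.abs(np.diff(slopes, axis=0)) > tol, axis=1)
    return lams[1:-1][mask]

x1, x2 = rng.normal(size=10), rng.normal(size=10)
print("approximate critical points:", find_corners(x1, x2))
```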
Wait, why do the critical points where the ReLUs toggle affect all coordinates simultaneously?
Thinking Summary
The user is asking a good question. Let me think about this carefully. When a ReLU toggles in a hidden layer, it doesn't necessarily affect all output coordinates. In fact, it only affects the output coordinates that have a non-zero path from that particular neuron to the output. Let me think about this more carefully:
1. If a neuron in layer $i$ toggles from active to inactive (or vice versa), this changes the output of that neuron from some value to 0 (or from 0 to some value).
2. This change propagates through the network. The effect on output coordinate $j$ depends on the weights connecting that neuron (through all subsequent layers) to output $j$.
3. If the total weight along all paths from that neuron to output $j$ is zero, then toggling that neuron won't affect output $j$ at all.
4. So a critical point for a neuron doesn't necessarily create a corner in all output coordinates, only in those outputs that have a non-zero connection from that neuron.
My earlier statement was imprecise. What's actually happening is:
- When you move along the interpolation path and a neuron toggles, this creates a critical point
- This critical point occurs at the same $\lambda^*$ value for all outputs (because it's the same neuron toggling)
- But whether this creates a visible corner in each output function depends on whether that neuron influences that output
So the correct statement would be: the location $\lambda^*$ where a neuron toggles is the same across all outputs, but not all outputs will necessarily show a corner at that point. Only outputs that are influenced by that neuron will show a corner.
You're absolutely right to question this; I was imprecise. A critical point (where a ReLU toggles) doesn't necessarily affect all output coordinates.
Here's what actually happens:
When a neuron in some hidden layer toggles at position $\lambda^*$ along the interpolation path, this creates a critical point at that specific value $\lambda^*$. However, whether this creates a visible corner in each output coordinate depends on the weights connecting that neuron to each output.
For output coordinate $j$:
- If there's a non-zero path of weights from the toggling neuron to output $j$, then $f_j(\lambda)$ will have a corner at $\lambda^*$
- If all paths from that neuron to output $j$ have zero total weight, then $f_j(\lambda)$ remains smooth at $\lambda^*$
So the critical point occurs at the same location $\lambda^*$ for all outputs (since it's the same neuron toggling), but it only manifests as a corner in the outputs that are actually influenced by that neuron.
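Here is a tiny hand-built example of this (the weights are contrived for illustration, not taken from the paper): output 0 is given zero weight from hidden neuron 0, so that neuron's toggle at $x=0$ bends output 1 but leaves output 0 linear there.

```python
import numpy as np

W1 = np.array([[1.0], [1.0]])   # two hidden neurons, 1-D input
b1 = np.array([0.0, -0.5])      # neuron 0 toggles at x=0, neuron 1 at x=0.5
W2 = np.array([[0.0, 1.0],      # output 0: no path from neuron 0
               [1.0, 1.0]])     # output 1: sees both neurons
b2 = np.zeros(2)

def f(x):
    return W2 @ np.maximum(W1 @ np.array([x]) + b1, 0.0) + b2

# Slopes of each output just left and right of neuron 0's toggle at x=0.
eps = 1e-4
left = (f(0.0) - f(-eps)) / eps
right = (f(eps) - f(0.0)) / eps
print("slope change per output at x=0:", right - left)  # -> [0. 1.]
```

Output 0 shows no slope change at $x=0$ even though a neuron toggled there, while output 1 picks up a corner.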
The paper's approach still works because:
- Most neurons in a trained network influence at least some outputs
- By analyzing multiple random interpolation paths, they collect critical points for different neurons
- Even if some outputs don't show corners for certain neurons, enough output coordinates will show corners to detect the critical points
This is why the paper's method works with either single or multiple outputs: it just needs at least one output coordinate that shows a corner to detect each critical point.
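To see why multiple paths help, here is a continuation of the earlier sketch (reusing the hypothetical f, find_corners, and rng defined above): pool the corners found on several random lines and map each $\lambda$ back to an input point.

```python
# Pool critical points from several random interpolation paths. With enough
# paths, essentially every neuron toggles somewhere on at least one of them.
critical_inputs = []
for _ in range(20):                          # 20 random lines through input space
    a, b = rng.normal(size=10), rng.normal(size=10)
    for lam in find_corners(a, b):
        critical_inputs.append(a + lam * (b - a))   # input where a neuron toggles

print(f"collected {len(critical_inputs)} critical points")
```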