Dror Baron -
Other Compressed Sensing Works
Compressed sensing (CS) is a new framework for integrated sensing and
compression. The fundamental revelation is that, if an N-sample signal
x is sparse or has a good K-term approximation in some basis, then
it can be reconstructed using M = O(K log(N/K)) << N linear projections of x
onto another basis.
Furthermore, x can be reconstructed using linear
programming, which has polynomial complexity.
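The reconstruction step above can be sketched numerically. The following minimal example is illustrative only: the dimensions, the Gaussian measurement matrix, and the use of SciPy's `linprog` are assumptions for the sketch, not taken from the papers described on this page. It recovers a K-sparse signal from M = O(K log(N/K)) random measurements by solving the basis-pursuit linear program.

```python
# Sketch of CS recovery by linear programming (basis pursuit).
# All dimensions and the Gaussian matrix are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K = 128, 5
M = 4 * K * int(np.ceil(np.log(N / K)))  # M = O(K log(N/K)) measurements

# K-sparse signal and Gaussian measurement matrix
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x

# Basis pursuit: min ||x||_1 s.t. Ax = y, written as an LP via x = u - v
c = np.ones(2 * N)               # objective sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])        # A(u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]

print(np.linalg.norm(x_hat - x))
```

The standard trick is the split x = u - v with u, v >= 0, which turns the l1 objective into a linear one; the LP then has polynomial complexity, as noted above.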
Some of the CS projects I have worked on are described here, and
links to numerous other papers appear on the
blog and the
compressed sensing resource page.
This webpage presents individual research projects in CS unrelated to the rest of my work.
Secrecy properties of CS:
Several papers mention the possibility that CS measurements are encrypted,
and I have investigated this claim. We consider a scenario where Alice has
a secret message (in our model the message is a real-valued K-sparse signal)
that she would like to share with Bob. She encodes this signal using an M
by N Gaussian encoding matrix. Bob receives the measurements, and can decode
the signal, because he knows the encoding matrix (in practice, Alice and Bob
could share the seed of a random number generator used to produce
the matrix). Can an eavesdropper (Eve), who intercepts the measurements,
recover the signal without knowing the encoding matrix? We evaluate this question
using two well-established approaches to encryption: information-theoretic and computational.
First, we consider the stronger information-theoretic notion of perfect secrecy.
This notion requires the mutual information between
signals and measurements to be zero. However, the signals and measurements are
statistically dependent, which rules out perfect secrecy.
Second, we consider the weaker notion of computational secrecy, which means that
Eve can only recover the signal with a prohibitively
large computational cost. We prove that CS achieves a computational notion of secrecy
in the face of an adversary attempting to use either ell_0 or ell_1 minimization
(combinatorial search and linear programming, respectively).
Our result hinges on a theorem that shows that with probability one Eve will
recover an M-sparse explanation instead of a K-sparse
explanation when using the wrong encoding matrix.
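This M-sparse-versus-K-sparse behavior is easy to observe numerically. In the sketch below (dimensions and matrices are illustrative assumptions, and l1 decoding is done with SciPy's `linprog` as a stand-in), Bob decodes with the correct matrix and Eve with a wrong one; Eve's explanation generically has on the order of M nonzero entries rather than K.

```python
# Illustration (assumed setup) of the M-sparse vs. K-sparse phenomenon:
# l1 decoding with the wrong Gaussian matrix yields a dense explanation.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, K, M = 128, 5, 40

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N))   # matrix shared by Alice and Bob
B = rng.standard_normal((M, N))   # Eve's (wrong) guess of the matrix
y = A @ x                         # intercepted measurements

def basis_pursuit(Phi, y):
    """min ||x||_1 s.t. Phi x = y, via the standard split x = u - v."""
    n = Phi.shape[1]
    res = linprog(np.ones(2 * n), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=(0, None))
    return res.x[:n] - res.x[n:]

x_bob = basis_pursuit(A, y)   # correct matrix: recovers the K-sparse signal
x_eve = basis_pursuit(B, y)   # wrong matrix: generically ~M nonzeros

print(np.sum(np.abs(x_bob) > 1e-6), np.sum(np.abs(x_eve) > 1e-6))
```

Eve's dense solution still explains the measurements perfectly, which is exactly why she cannot distinguish it from the true sparse message without the matrix.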
Surprisingly, we have received comments linking this result
to other research areas; Igor Carron offers an interesting perspective.
Y. Rachlin and D. Baron,
"The Secrecy of Compressive Sensing Measurements,"
Proceedings of the 46th Allerton Conference on Communication,
Control, and Computing,
Monticello, IL, September 2008.
In some engineering systems, a pattern of faults (zeros and ones) is measured
by a linear superposition of fault signatures. This superposition problem resembles
compressed sensing. We have modeled a simple setting where the signatures are
themselves sparse. In our work, we performed a detailed numerical evaluation of
multiple algorithms that have been proposed for fault identification and related
problems. For the most part, the prior art assumes either sparsity (in compressed
sensing) or a Bernoulli input (in communication systems).
By incorporating both sparsity and the Bernoulli nature of the input, our
belief propagation solver provides superior estimation of the fault patterns.
The strong performance of our algorithm is explained by noting that belief
propagation approaches the fundamental information-theoretic limits
on signal reconstruction.
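To make the measurement model concrete, here is a toy sketch; all dimensions and probabilities are illustrative assumptions. A Bernoulli fault pattern f is observed through the superposition y = S f, where the columns of S are sparse fault signatures. A plain l1 program with box constraints stands in for the non-parametric belief-propagation solver, which this sketch does not implement.

```python
# Toy sketch (assumed dimensions) of the fault-signature superposition model.
# A simple l1 baseline replaces the paper's belief-propagation solver here.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, M = 64, 32            # number of possible faults and of measurements
p_fault, p_sig = 0.05, 0.2

# Bernoulli fault pattern (zeros and ones) and sparse signature matrix
f = (rng.random(N) < p_fault).astype(float)
S = (rng.random((M, N)) < p_sig) * rng.standard_normal((M, N))
y = S @ f                # measured superposition of active signatures

# l1 baseline; bounds 0 <= f <= 1 loosely encode the Bernoulli nature of f
res = linprog(np.ones(N), A_eq=S, b_eq=y, bounds=(0, 1))
f_hat = (res.x > 0.5).astype(float)   # round back to a 0/1 fault pattern

print(int(np.sum(f_hat != f)))        # number of misidentified faults
```

Exploiting both structures at once, as the belief-propagation solver does, is what the baseline above cannot do: the box constraint only hints at the Bernoulli prior instead of using it exactly.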
Feel free to download the paper:
D. Bickson, D. Baron, A. Ihler, H. Avissar, and D. Dolev,
"Fault Identification via Non-parametric Belief Propagation,"
IEEE Transactions on Signal Processing,
vol. 59, no. 6, pp. 2602-2613, June 2011.
Back to my homepage.
Last updated June 2014.