If you cannot find anything more, look for something else (Brigitte Fontaine)
Update: 2013/12/31

Companion page

To the geophysical signal processing paper: Adaptive multiple subtraction with wavelet-based complex unary Wiener filters, Sergi Ventosa, Sylvain Le Roy, Irène Huard, Antonio Pica, Hérald Rabeson, Patrice Ricarte, Laurent Duval, Geophysics, vol. 77, p. 183-192, November-December 2012 [local PDF] [arxiv (color version)] [html] [Geophysics PDF] [Geophysics PDF plus] [GeoScienceWorld] [Cited by Google Scholar > 4]

Abstract

Adaptive subtraction is a key element in predictive multiple-suppression methods. It minimizes misalignments and amplitude differences between modeled and actual multiples, and thus reduces multiple contamination in the dataset after subtraction. Due to the high cross-correlation between their waveforms, the main challenge resides in attenuating multiples without distorting primaries. As they overlap over a wide frequency range, we split this wide-band problem into a set of more tractable narrow-band filter designs, using a 1D complex wavelet frame. This decomposition enables a single-pass adaptive subtraction via complex, single-sample (unary) Wiener filters, consistently estimated on overlapping windows in a complex wavelet transformed domain. Each unary filter compensates amplitude differences within its frequency support, and can correct small and large misalignment errors through phase and integer delay corrections. This approach greatly simplifies the matching filter estimation and, despite its simplicity, narrows the gap between 1D and standard adaptive 2D methods on field data.
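As a reading aid (not the authors' code), the unary Wiener filter of the abstract can be sketched in a few lines: within one subband of a 1D complex wavelet frame, a single-tap complex filter is estimated on each window; its modulus compensates amplitude and its phase corrects sub-sample misalignment. Function and variable names, the window handling, and the non-overlapping windows are illustrative assumptions:

```python
import numpy as np

def unary_wiener(d_w, m_w, win=64, eps=1e-12):
    """One-tap (unary) complex Wiener filter per window, in one subband.

    d_w, m_w: complex wavelet coefficients of the recorded trace and of
    the predicted multiple model (hypothetical names).
    Returns the adapted multiple estimate to subtract from d_w.
    """
    adapted = np.empty_like(d_w)
    for k in range(0, len(d_w), win):
        d, m = d_w[k:k + win], m_w[k:k + win]
        # Wiener solution for a single complex tap: w = <m, d> / <m, m>.
        # |w| rescales the amplitude; arg(w) applies a phase (sub-sample
        # delay) correction within the subband's frequency support.
        w = np.vdot(m, d) / (np.vdot(m, m).real + eps)
        adapted[k:k + win] = w * m
    return adapted
```

In the paper the windows overlap and an additional integer delay search handles large misalignments; the sketch above only shows the core one-tap estimation.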

References list on multiple removal

Adaptive multiple subtraction with wavelet-based complex unary
Wiener filters

Notes

The paper extends Complex wavelet adaptive multiple subtraction with unary filters (Earthdoc), Sergi Ventosa, Hérald Rabeson, Patrice Ricarte, Laurent Duval, EAGE 2011, June 2011.
Reducing multiple contamination (Verschuur and Berkhout, 1992; Matson and Dragoset, 2005) represents one of the greatest challenges in seismic processing. Two major aspects differentiate multiples from primary reflections: (1) the velocity of primaries is greater than that of multiples, and (2) multiples are periodic events, in contrast with primaries. Multiple attenuation methods can hence be classified into two broad categories: filtering methods, which exploit a differentiating feature between primaries and multiples (Kelamis et al., 2008), and predictive suppression methods, which first predict and then subtract the multiple events from the original seismic data (Pica et al., 2005; Weisser et al., 2006). Alas, no single approach fits all scenarios. Most contractors thus propose an extensive portfolio of demultiple algorithms that, in practice, may be combined and cascaded to obtain acceptable solutions.
Predictive multiple suppression methods consist of two main elements: prediction and subtraction. Prediction builds a multiple estimate from the primaries using prior knowledge. Subtraction minimizes amplitude differences and small misalignments between actual multiple events and their predicted models, to maximize multiple attenuation in the input dataset. The efficiency of this suppression strongly depends on the adaptation capability of the matching filter employed in the subtraction element. In the following, we focus on the enhancement of this key element.
Primaries and multiples are not fully uncorrelated, as they are generated from the same source. This poses a major challenge in the design of an optimal matching filter that minimizes the multiple events in the input dataset given their predicted model. Slight differences in their spectra may be exploited with wavelet-based approaches (Pokrovskaia and Wombell, 2004; Ahmed et al., 2007; Neelamani et al., 2008). The present paper follows a similar trail, with a twist towards filter adaptation.
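The subtraction element discussed above is classically a least-squares matching filter: find f minimizing ||d - f * m||_2 between the recorded data d and the predicted multiples m. A minimal single-trace sketch (illustrative only; it uses a circular-convolution shortcut via np.roll, whereas practical codes use windowed, zero-padded convolutions over many traces) could read:

```python
import numpy as np

def matching_filter(d, m, taps=11):
    """Least-squares matching filter f minimizing ||d - f * m||_2.

    d: recorded trace, m: predicted multiple model (hypothetical names).
    Builds a circular convolution matrix of m with a centered filter.
    """
    M = np.column_stack([np.roll(m, k - taps // 2) for k in range(taps)])
    f, *_ = np.linalg.lstsq(M, d, rcond=None)
    return f

def subtract_multiples(d, m, taps=11):
    """Primaries estimate: data minus the filtered multiple model."""
    f = matching_filter(d, m, taps)
    M = np.column_stack([np.roll(m, k - taps // 2) for k in range(taps)])
    return d - M @ f
```

When primaries and multiples are strongly correlated, this plain L2 fit tends to attack primaries as well, which is precisely the limitation the wavelet-domain and L1-based methods in the references try to overcome.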
It is followed by Unary adaptive subtraction of joint multiple models with complex wavelet frames, Sergi Ventosa, Sylvain Le Roy, Irène Huard, Antonio Pica, Hérald Rabeson, Laurent Duval, Proceedings of the SEG Annual Meeting, 2012 (poster)
Multiple attenuation is one of the greatest challenges in seismic processing. Due to the high cross-correlation between primaries and multiples, attenuating the latter without distorting the former is a complicated problem. We propose here a joint multiple model-based adaptive subtraction, using single-sample (unary) filter estimation in a complex wavelet transformed domain. The method offers more robustness to incoherent noise through redundant decomposition. It is first tested on synthetic data, then applied to field data, with a single-model adaptation and a combination of several multiple models.

Seismic multiple filtering references

    Abma, R., Kabir, N., Matson, K.H., Michell, S., Shaw, S.A. & McLain, B. Comparisons of adaptive subtraction methods for multiple attenuation 2005 The Leading Edge
    Vol. 24, pp. 277-280 
    article DOI  
    Abstract: Coherent noise may be removed from seismic data by first making an approximate model of the noise, then producing an even better estimation of the noise by adaptively matching the modeled noise to the data. This modified model of the noise may then be subtracted from the data, eliminating most coherent noise. The success of this approach depends both on how well the initial model matches the true noise and the success of the adaptive matching in modifying the initial noise prediction to match the true noise. The adaptive matching step is complicated by the presence of the signal and other noise in the data. In this article, the noise of interest is surface-related multiples, although other types of coherent noise may be removed with this approach.
    BibTeX:
    @article{Abma_R_2005_j-tle_com_asmma,
      author = {Abma, R. and Kabir, N. and Matson, K. H. and Michell, S. and Shaw, S. A. and McLain, B.},
      title = {Comparisons of adaptive subtraction methods for multiple attenuation},
      journal = {The Leading Edge},
      year = {2005},
      volume = {24},
      pages = {277--280},
      doi = {http://dx.doi.org/10.1190/1.1895312}
    }
    
    Ahmed, I. 2D wavelet transform-domain adaptive subtraction for enhancing 3D SRME 2007
    Vol. 26(1), SEG Annual International Meeting, pp. 2490-2494 
    inproceedings DOI URL 
    Abstract: Surface related multiple elimination (SRME) (Berkhout, 1982; Verschuur et al., 1992) is a very popular and effective algorithm for removing surface related multiples. The SRME method includes two steps: multiple modeling or multiple prediction, followed by adaptive subtraction. The success of the SRME method depends on how well the predicted multiples match the actual multiples in the data and on the success of the adaptive subtraction algorithm. This paper deals with the adaptive subtraction algorithm. There are two common strategies for adaptive subtraction. The first is posed as a least-squares minimization problem that minimizes the energy difference between the original data and the predicted multiples in the x-t domain. The second is pattern-based adaptive subtraction (Spitz, 1999, 2000; Soubaras, 1994), which relies on the assumption that primaries and multiples are predictable in the f-x domain. A detailed comparison of different adaptive subtraction algorithms is given by Abma et al. (2005). One of the main conclusions of that paper was that the least-squares minimization technique is probably the best adaptive subtraction algorithm available at present; however, when multiples strongly interfere with the primaries, the technique is not as effective. This conclusion is the main motivation for this paper. The transformation of the data to the 2D stationary wavelet transform (SWT) domain (Nason et al., 1995) provides a potential dip separation of the data and thus gives an opportunity to separate interfering events. In this paper, the implementation of the least-squares minimization technique in the 2D SWT domain is discussed.
    BibTeX:
    @inproceedings{Ahmed_I_2007_p-seg_2d_wtdase3dsrme,
      author = {Imtiaz Ahmed},
      title = {2D wavelet transform-domain adaptive subtraction for enhancing 3D SRME},
      booktitle = {Annual International Meeting},
      publisher = {Soc. Expl. Geophysicists},
      year = {2007},
      volume = {26},
      number = {1},
      pages = {2490--2494},
      url = {http://link.aip.org/link/?SGA/26/2490/1},
      doi = {http://dx.doi.org/10.1190/1.2792984}
    }
    
    Berkhout, A.J. & Verschuur, D.J. Focal transformation, an imaging concept for signal restoration and noise removal 2006 Geophysics
    Vol. 71(6), pp. A55-A59 
    article DOI URL 
    Abstract: Interpolation of data beyond aliasing limits and removal of noise that occurs within the seismic bandwidth are still important problems in seismic processing. The focal transform is introduced as a promising tool in data interpolation and noise removal, allowing the incorporation of macroinformation about the involved wavefields. From a physical point of view, the principal action of the forward focal operator is removing the spatial phase of the signal content from the input data, and the inverse focal operator restores what the forward operator has removed. The strength of the method is that in the transformed domain, the focused signals at the focal area can be separated from the dispersed noise away from the focal area. Applications of particular interest in preprocessing are interpolation of missing offsets and reconstruction of signal beyond aliasing. The latter can be seen as the removal of aliasing noise.
    BibTeX:
    @article{Berkhout_A_2006_j-geophysics_foc_ticsrnr,
      author = {A. J. Berkhout and D. J. Verschuur},
      title = {Focal transformation, an imaging concept for signal restoration and noise removal},
      journal = {Geophysics},
      publisher = {SEG},
      year = {2006},
      volume = {71},
      number = {6},
      pages = {A55--A59},
      url = {http://link.aip.org/link/?GPY/71/A55/1},
      doi = {http://dx.doi.org/10.1190/1.2356996}
    }
    
    Beylkin, G. & Vassiliou, A. Fast Radon transform for multiple attenuation 1998
    Vol. 17(1), SEG Annual International Meeting, pp. 1351-1352 
    inproceedings DOI URL 
    Abstract: Attenuation of multiple reflections is a significant problem in seismic data processing. One of the most frequently used techniques for this problem is the parabolic Radon transform. The main point of the procedure is that multiple reflections can be filtered out in a relatively simple manner in the Radon domain. The inverse Radon transform then reconstructs the original data with attenuated multiple reflections. The parabolic Radon transform is an instance of the Discrete Radon Transform, for which there is an algorithm for computing the inverse (U.S. patent 4,760,563; Beylkin, 1987; Thorson and Claerbout, 1985; Hampson, 1986). The algorithm suggested in U.S. patent 4,760,563 and Beylkin (1987) for computing the Discrete Radon transform is fast in time, $O(N_f \log N_f)$, but is slow, $O(N^2)$, in the number of traces, with total complexity $O(N^2 N_f \log N_f)$, where $N_f$ is the number of frequencies and $N$ is the number of traces.
    BibTeX:
    @inproceedings{Beylkin_G_1998_p-seg_fas_rtma,
      author = {G. Beylkin and A. Vassiliou},
      title = {Fast Radon transform for multiple attenuation},
      booktitle = {Annual International Meeting},
      publisher = {Soc. Expl. Geophysicists},
      year = {1998},
      volume = {17},
      number = {1},
      pages = {1351--1352},
      url = {http://link.aip.org/link/?SGA/17/1351/1},
      doi = {http://dx.doi.org/10.1190/1.1820153}
    }
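The parabolic Radon demultiple idea recalled in the entry above (events focus along parabolas in the Radon domain, where multiples can then be muted) can be illustrated with a bare-bones adjoint (stacking) operator. This is a didactic sketch with hypothetical names and nearest-sample interpolation, not the fast algorithm of Beylkin and Vassiliou:

```python
import numpy as np

def parabolic_radon_adjoint(d, dt, x, qs):
    """Adjoint (stacking) parabolic Radon transform, time domain.

    d: (nt, nx) gather; dt: sample interval (s); x: offsets (m);
    qs: trial curvatures (s/m^2). Each output column stacks the data
    along the parabola t = tau + q * x^2 (nearest-sample shifts).
    """
    nt, nx = d.shape
    m = np.zeros((nt, len(qs)))
    for iq, q in enumerate(qs):
        for ix, xx in enumerate(x):
            shift = int(round(q * xx ** 2 / dt))
            if shift < nt:
                # stack trace ix along the parabola of curvature q
                m[: nt - shift, iq] += d[shift:, ix]
    return m
```

An event with curvature q0 stacks coherently into a single Radon-domain peak at (tau0, q0), while events with other curvatures smear, which is what makes muting multiples in this domain possible.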
    
    Buttkus, B. Homomorphic Filtering --- Theory And Practice 1975 Geophys. Prospect.
    Vol. 23(4), pp. 712-748 
    article DOI URL 
    Abstract: The application of homomorphic filtering in marine seismic reflection work is investigated, with the aims of estimating the basic wavelet, performing wavelet deconvolution, and eliminating multiples. Each of these deconvolution problems can be subdivided into two parts: the first is the detection of those parts of the cepstrum which ought to be suppressed in processing; the second includes the actual filtering process and the problem of minimizing the random noise which is generally enhanced during the homomorphic procedure. The application of homomorphic filters to synthetic seismograms and air-gun measurements shows the possibilities for the practical application of the method, as well as the critical parameters which determine the quality of the results: (a) the signal-to-noise ratio (SNR) of the input data; (b) the window width and the cepstrum components for the separation of the individual parts; (c) the time invariance of the signal in the trace. In the presence of random noise, the power cepstrum is most efficient for the detection of wavelet arrival times. For wavelet estimation, overlapping signals can be detected with the power cepstrum up to a SNR of three. In comparison, the detection of long-period multiples is much more complicated. While the exact determination of the water-reverberation arrival times can be achieved with the power cepstrum up to a multiples-to-primaries ratio of three to five, the detection of internal multiples is generally not possible, since for these multiples this threshold of detectability and arrival-time determination is generally not reached. For wavelet estimation, comb filtering of the complex cepstrum is most valuable. The wavelet estimation gives no problems up to a SNR of ten. Even in the presence of larger noise, a reasonable estimation can be obtained up to a SNR of five by filtering the phase spectrum during the computation of the complex cepstrum. In contrast, the successful application of the method for multiple reduction is confined to a SNR of ten, since the filtering of the phase spectrum for noise reduction cannot be applied. Even if the threshold results are empirical, they show the limits for the successful application of the method.
    BibTeX:
    @article{Buttkus_B_1975_j-geophys-prospect_hom_ftp,
      author = {Buttkus, B.},
      title = {Homomorphic Filtering --- Theory And Practice},
      journal = {Geophys. Prospect.},
      publisher = {Blackwell Publishing Ltd},
      year = {1975},
      volume = {23},
      number = {4},
      pages = {712--748},
      url = {http://dx.doi.org/10.1111/j.1365-2478.1975.tb01555.x},
      doi = {http://dx.doi.org/10.1111/j.1365-2478.1975.tb01555.x}
    }
    
    Donno, D., Chauris, H. & Noble, M. Curvelet-based multiple prediction 2010 Geophysics
    Vol. 75(6), pp. WB255-WB263 
    article DOI URL 
    Abstract: The suppression of multiples is a crucial task when processing seismic reflection data. Using the curvelet transform for surface-related multiple prediction is investigated. From a geophysical point of view, a curvelet can be seen as the representation of a local plane wave and is particularly well suited for seismic data decomposition. For the prediction of multiples in the curvelet domain, first it is proposed to decompose the input data into curvelet coefficients. These coefficients are then convolved together to predict the coefficients associated with multiples, and the final result is obtained by applying the inverse curvelet transform. The curvelet transform offers two advantages. The directional characteristic of curvelets allows for exploitation of Snell's law at the sea surface. Moreover, the possible aliasing in the predicted multiple is better managed by using the curvelet multiscale property to weight the prediction according to the low-frequency part of the data. 2D synthetic and field data examples show that some artifacts and aliasing effects are indeed reduced in the multiple prediction with the use of curvelets, thus allowing for an improved multiple subtraction result.
    BibTeX:
    @article{Donno_D_2010_j-geophysics_cur_bmp,
      author = {Donno, D. and Chauris, H. and Noble, M.},
      title = {Curvelet-based multiple prediction},
      journal = {Geophysics},
      year = {2010},
      volume = {75},
      number = {6},
      pages = {WB255--WB263},
      url = {http://link.aip.org/link/?GPY/75/WB255/1},
      doi = {http://dx.doi.org/10.1190/1.3502663}
    }
    
    Dragoset, B., Verschuur, E., Moore, I. & Bisley, R. A perspective on 3D surface-related multiple elimination 2010 Geophysics
    Vol. 75(5), pp. 75A245-75A261 
    article DOI URL 
    Abstract: Surface-related multiple elimination (SRME) is an algorithm that predicts all surface multiples by a convolutional process applied to seismic field data. Only minimal preprocessing is required. Once predicted, the multiples are removed from the data by adaptive subtraction. Unlike other methods of multiple attenuation, SRME does not rely on assumptions or knowledge about the subsurface, nor does it use event properties to discriminate between multiples and primaries. In exchange for this "freedom from the subsurface," SRME requires knowledge of the acquisition wavelet and a dense spatial distribution of sources and receivers. Although a 2D version of SRME sometimes suffices, most field data sets require 3D SRME for accurate multiple prediction. All implementations of 3D SRME face a serious challenge: The sparse spatial distribution of sources and receivers available in typical seismic field data sets does not conform to the algorithmic requirements. There are several approaches to implementing 3D SRME that address the data sparseness problem. Among those approaches are pre-SRME data interpolation, on-the-fly data interpolation, zero-azimuth SRME, and true-azimuth SRME. Field data examples confirm that (1) multiples predicted using true-azimuth 3D SRME are more accurate than those using zero-azimuth 3D SRME and (2) on-the-fly interpolation produces excellent results.
    BibTeX:
    @article{Dragoset_B_2010_j-geophysics_per_3dsrme,
      author = {Bill Dragoset and Eric Verschuur and Ian Moore and Richard Bisley},
      title = {A perspective on 3D surface-related multiple elimination},
      journal = {Geophysics},
      publisher = {SEG},
      year = {2010},
      volume = {75},
      number = {5},
      pages = {75A245--75A261},
      url = {http://link.aip.org/link/?GPY/75/75A245/1},
      doi = {http://dx.doi.org/10.1190/1.3475413}
    }
    
    Fomel, S. Adaptive multiple subtraction using regularized nonstationary regression 2009 Geophysics
    Vol. 74(1), pp. V25-V33 
    article DOI URL 
    Abstract: Stationary regression is the backbone of seismic data-processing algorithms including match filtering, which is commonly applied for adaptive multiple subtraction. However, the assumption of stationarity is not always adequate for describing seismic signals. I have developed a general method of nonstationary regression that applies to nonstationary match filtering. The key idea is the use of shaping regularization to constrain the variability of nonstationary regression coefficients. Simple computational experiments demonstrate advantages of shaping regularization over classic Tikhonov regularization, including a more intuitive selection of parameters and a faster iterative convergence. Using benchmark synthetic data examples, I have successfully applied this method to the problem of adaptive subtraction of multiple reflections.
    BibTeX:
    @article{Fomel_S_2009_j-geophysics_ada_msrnr,
      author = {Sergey Fomel},
      title = {Adaptive multiple subtraction using regularized nonstationary regression},
      journal = {Geophysics},
      publisher = {SEG},
      year = {2009},
      volume = {74},
      number = {1},
      pages = {V25--V33},
      url = {http://link.aip.org/link/?GPY/74/V25/1},
      doi = {http://dx.doi.org/10.1190/1.3043447}
    }
    
    Gilloire, A. & Vetterli, M. Adaptive filtering in sub-bands with critical sampling: analysis, experiments, and application to acoustic echo cancellation 1992 IEEE Trans. Signal Process.
    Vol. 40(8), pp. 1862-1875 
    article DOI  
    Abstract: An exact analysis of the critically subsampled two-band modelization scheme is given, and it is demonstrated that adaptive cross-filters between the subbands are necessary for modelization with small output errors. It is shown that perfect reconstruction filter banks can yield exact modelization. These results are extended to the critically subsampled multiband schemes, and important computational savings are seen to be achieved by using good quality filter banks. The problem of adaptive identification in critically subsampled subbands is considered and an appropriate adaptation algorithm is derived. The authors give a detailed analysis of the computational complexity of all the discussed schemes, and experimentally verify the theoretical results that are obtained. The adaptive behavior of the subband schemes that were tested is discussed.
    BibTeX:
    @article{Gilloire_V_1992_j-ieee-tsp_ada_fsbcsaeaaec,
      author = {Gilloire, A. and Vetterli, M.},
      title = {Adaptive filtering in sub-bands with critical sampling: analysis, experiments, and application to acoustic echo cancellation},
      journal = {IEEE Trans. Signal Process.},
      year = {1992},
      volume = {40},
      number = {8},
      pages = {1862--1875},
      doi = {http://dx.doi.org/10.1109/78.149989}
    }
    
    Guitton, A. A pattern-based approach for multiple removal applied to a 3D Gulf of Mexico data set 2004 Geophys. Prospect.
    Vol. 54(2), pp. 135-152 
    article  
    Abstract: Surface-related multiples are attenuated for one sail line and one streamer of a 3D data set (courtesy of Compagnie Générale de Géophysique). The survey was carried out in the Gulf of Mexico in the Green Canyon area where salt intrusions close to the water-bottom are present. Because of the complexity of the subsurface, a wavefield method incorporating the full 3D volume of the data for multiple removal is necessary. This method comprises modelling of the multiples, where the data are used as a prediction operator, and a subtraction step, where the model of the multiples is adaptively removed from the data with matching filters. The accuracy of the multiple model depends on the source/receiver coverage at the surface. When this coverage is not dense enough, the multiple model contains errors that make successful subtraction more difficult. In these circumstances, one can either (1) improve the modelling step by interpolating the missing traces, (2) improve the subtraction step by designing methods that are less sensitive to modelling errors, or (3) both. For this data set, the second option is investigated by predicting the multiples in a 2D sense (as opposed to 3D) and performing the subtraction with a pattern-based approach. Because some traces and shots are missing for the 2D prediction, the data are interpolated in the in-line direction using a hyperbolic Radon transform with and without sparseness constraints. The interpolation with a sparseness constraint yields the best multiple model. For the subtraction, the pattern-based technique is compared with a more standard, adaptive-subtraction scheme. The pattern-based approach is based on the estimation of 3D prediction-error filters for the primaries and the multiples, followed by a least-squares estimation of the primaries. Both methods are compared before and after prestack depth migration. 
These results suggest that, when the multiple model is not accurate, the pattern-based method is more effective than adaptive subtraction at removing surface-related multiples while preserving the primaries.
    BibTeX:
    @article{Guitton_A_2004_j-geophys-prospect_pat_bamra3dgmds,
      author = {Guitton, A.},
      title = {A pattern-based approach for multiple removal applied to a 3D Gulf of Mexico data set},
      journal = {Geophys. Prospect.},
      year = {2004},
      volume = {54},
      number = {2},
      pages = {135--152}
    }
    
    Guitton, A. & Verschuur, D.J. Adaptive subtraction of multiples using the $L_1$-norm 2004 Geophys. Prospect.
    Vol. 52, pp. 27-38 
    article  
    Abstract: A strategy for multiple removal consists of estimating a model of the multiples and then adaptively subtracting this model from the data by estimating shaping filters. A possible and efficient way of computing these filters is by minimizing the difference or misfit between the input data and the filtered multiples in a least-squares sense. Therefore, the signal is assumed to have minimum energy and to be orthogonal to the noise. Some problems arise when these conditions are not met. For instance, for strong primaries with weak multiples, we might fit the multiple model to the signal (primaries) and not to the noise (multiples). Consequently, when the signal does not exhibit minimum energy, we propose using the L1-norm, as opposed to the L2-norm, for the filter estimation step. This choice comes from the well-known fact that the L1-norm is robust to 'large' amplitude differences when measuring data misfit. The L1-norm is approximated by a hybrid L1/L2-norm minimized with an iteratively reweighted least-squares (IRLS) method. The hybrid norm is obtained by applying a simple weight to the data residual. This technique is an excellent approximation to the L1-norm. We illustrate our method with synthetic and field data where internal multiples are attenuated. We show that the L1-norm leads to much improved attenuation of the multiples when the minimum energy assumption is violated. In particular, the multiple model is fitted to the multiples in the data only, while preserving the primaries.
    BibTeX:
    @article{Guitton_A_2004_j-geophys-prospect_ada_smul1n,
      author = {A. Guitton and D. J. Verschuur},
      title = {Adaptive subtraction of multiples using the $L_1$-norm},
      journal = {Geophys. Prospect.},
      year = {2004},
      volume = {52},
      pages = {27--38}
    }
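The hybrid L1/L2 IRLS scheme described in the abstract above can be sketched as follows; the specific weight function, the circular-convolution shortcut and all names are illustrative assumptions, not Guitton and Verschuur's exact implementation:

```python
import numpy as np

def irls_matching_filter(d, m, taps=7, iters=10, eps=1e-3):
    """Hybrid L1/L2 matching filter via iteratively reweighted
    least-squares (IRLS); an illustrative sketch.

    The weight 1/sqrt(1 + (r/eps)^2) downweights large residuals, so
    strong primaries are less likely to be fitted by the multiple model.
    """
    M = np.column_stack([np.roll(m, k - taps // 2) for k in range(taps)])
    f, *_ = np.linalg.lstsq(M, d, rcond=None)      # plain L2 starting guess
    for _ in range(iters):
        r = d - M @ f                              # current residual
        w = 1.0 / np.sqrt(1.0 + (r / eps) ** 2)    # hybrid L1/L2 weight
        sw = np.sqrt(w)
        # weighted least squares: minimize sum_i w_i * r_i^2
        f, *_ = np.linalg.lstsq(sw[:, None] * M, sw * d, rcond=None)
    return f
```

With residuals much smaller than eps the weights stay near one and the scheme reduces to the L2 fit; large residuals (strong primaries) are progressively downweighted, approximating the L1 misfit.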
    
    Hampson, D. Inverse velocity stacking for multiple elimination 1986
    Vol. 5(1), SEG Annual International Meeting, pp. 422-424 
    inproceedings DOI URL 
    BibTeX:
    @inproceedings{Hampson_D_1986_p-seg_inv_vsme,
      author = {Dan Hampson},
      title = {Inverse velocity stacking for multiple elimination},
      booktitle = {Annual International Meeting},
      publisher = {Soc. Expl. Geophysicists},
      year = {1986},
      volume = {5},
      number = {1},
      pages = {422--424},
      url = {http://link.aip.org/link/?SGA/5/422/1},
      doi = {http://dx.doi.org/10.1190/1.1893060}
    }
    
    Herrmann, F.J. & Verschuur, D.J. Robust Curvelet-Domain Primary-Multiple Separation with Sparseness Constraints 2005 Proc. EAGE Conf. Tech. Exhib.  inproceedings  
    Abstract: A non-linear primary-multiple separation method using curvelet frames is presented. The advantage of this method is that curvelets arguably provide an optimal sparse representation for both primaries and multiples. As such, curvelet frames are ideal candidates to separate primaries from multiples given inaccurate predictions for these two data components. The method derives its robustness regarding the presence of noise, errors in the prediction, and missing data from the curvelet frame's ability (i) to represent both signal components with a limited number of multi-scale and directional basis functions; (ii) to separate the components on the basis of differences in location, orientation and scales; and (iii) to minimize correlations between the coefficients of the two components. A brief sketch of the theory is provided, as well as a number of examples on synthetic and real data.
    BibTeX:
    @inproceedings{Herrmann_F_2005_p-eage_rob_cdpmssc,
      author = {Herrmann, F. J. and Verschuur, D. J.},
      title = {Robust Curvelet-Domain Primary-Multiple Separation with Sparseness Constraints},
      booktitle = {Proc. EAGE Conf. Tech. Exhib.},
      publisher = {European Assoc. Geoscientists Eng.},
      year = {2005}
    }
    
    Herrmann, F.J. & Verschuur, D.J. Curvelet imaging and processing: adaptive multiple elimination 2004 Proc. CSEG Nat. Conv.  inproceedings  
    Abstract: Predictive multiple suppression methods consist of two main steps: a prediction step, in which multiples are predicted from the seismic data, and a subtraction step, in which the predicted multiples are matched with the true multiples in the data. The last step appears crucial in practice: an incorrect adaptive subtraction method will cause multiples to be sub-optimally subtracted or primaries being distorted, or both. Therefore, we propose a new domain for separation of primaries and multiples via the Curvelet transform. This transform maps the data into almost orthogonal localized events with a directional and spatial-temporal component. The multiples are suppressed by thresholding the input data at those Curvelet components where the predicted multiples have large amplitudes. In this way the more traditional filtering of predicted multiples to fit the input data is avoided. An initial field data example shows a considerable improvement in multiple suppression.
    BibTeX:
    @inproceedings{Herrmann_F_2004_p-cseg_cur_ipame,
      author = {Herrmann, F. J. and Verschuur, D. J.},
      title = {Curvelet imaging and processing: adaptive multiple elimination},
      booktitle = {Proc. CSEG Nat. Conv.},
      publisher = {Canadian Soc. Expl. Geophysicists},
      year = {2004}
    }
    
    Herrmann, P., Mojesky, T., Magesan, T. & Hugonnet, P. De-aliased, high-resolution Radon transforms 2000
    Vol. 19(1), SEG Annual International Meeting, pp. 1953-1956 
    inproceedings DOI URL 
    Abstract: Multiple elimination methods based on the move-out discrimination between primaries and multiples rely heavily on the focusing of seismic events in the parabolic Radon domain. This focusing, however, is affected both by the finite spatial aperture and sampling of the data. As a consequence of the resulting smearing, multiple energy may be mapped into the primary model and conversely primary energy may be mapped into the multiple model. This leads to poor multiple removal and to the nonpreservation of the primary amplitudes. To overcome these pitfalls one has to make use of De-aliased, High-Resolution Radon transforms. High-resolution Radon transforms have already been proposed by some authors. Here we present a novel approach that simultaneously tackles the aliasing and resolution issues in a non-iterative way.
    BibTeX:
    @inproceedings{Herrmann_P_2000_p-seg_de_ahrrt,
      author = {Herrmann, P. and Mojesky, T. and Magesan, T. and Hugonnet, P.},
      title = {De-aliased, high-resolution Radon transforms},
      booktitle = {Annual International Meeting},
      publisher = {Soc. Expl. Geophysicists},
      year = {2000},
      volume = {19},
      number = {1},
      pages = {1953--1956},
      url = {http://link.aip.org/link/?SGA/19/1953/1},
      doi = {http://dx.doi.org/10.1190/1.1815818}
    }
    
    de Hoop, M.V., Smith, H., Uhlmann, G. & van der Hilst, R.D. Seismic imaging with the generalized Radon transform: a curvelet transform perspective 2009 Inverse Problems
    Vol. 25(2), pp. 025005 (21pp) 
    article URL 
    Abstract: A key challenge in the seismic imaging of reflectors using surface reflection data is the subsurface illumination produced by a given data set and for a given complexity of the background model (of wave speeds). The imaging is described here by the generalized Radon transform. To address the illumination challenge and enable (accurate) local parameter estimation, we develop a method for partial reconstruction. We make use of the curvelet transform, the structure of the associated matrix representation of the generalized Radon transform, which needs to be extended in the presence of caustics and phase linearization. We pair an image target with partial waveform reflection data, and develop a way to solve the matrix normal equations that connect their curvelet coefficients via diagonal approximation. Moreover, we develop an approximation, reminiscent of Gaussian beams, for the computation of the generalized Radon transform matrix elements only making use of multiplications and convolutions, given the underlying ray geometry; this leads to computational efficiency. Throughout, we exploit the (wave number) multi-scale features of the dyadic parabolic decomposition underlying the curvelet transform and establish approximations that are accurate for sufficiently fine scales. The analysis we develop here has its roots in and represents a unified framework for (double) beamforming and beam-stack imaging, parsimonious pre-stack Kirchhoff migration, pre-stack plane-wave (Kirchhoff) migration and delayed-shot pre-stack migration.
    BibTeX:
    @article{DeHoop_M_2009_j-inv-prob_sei_igrtctp,
      author = {de Hoop, M. V. and Smith, H. and Uhlmann, G. and van der Hilst, R. D.},
      title = {Seismic imaging with the generalized Radon transform: a curvelet transform perspective},
      journal = {Inverse Problems},
      year = {2009},
      volume = {25},
      number = {2},
      pages = {025005 (21pp)},
      url = {http://stacks.iop.org/0266-5611/25/025005}
    }
    
    Huo, S. & Wang, Y. Improving adaptive subtraction in seismic multiple attenuation 2009 Geophysics
    Vol. 74(4), pp. 59-67 
    article DOI  
    Abstract: In seismic multiple attenuation, once the multiple models have been built, the effectiveness of the processing depends on the subtraction step. Usually the primary energy is partially attenuated during the adaptive subtraction if an L2-norm matching filter is used to solve a least-squares problem. The expanded multichannel matching (EMCM) filter generally is effective, but conservative parameters adopted to preserve the primary could lead to some remaining multiples. We have managed to improve the multiple attenuation result through an iterative application of the EMCM filter to accumulate the effect of subtraction. A Butterworth-type masking filter based on the multiple model can be used to preserve most of the primary energy prior to subtraction, and then subtraction can be performed on the remaining part to better suppress the multiples without affecting the primaries. Meanwhile, subtraction can be performed according to the orders of the multiples, as a single subtraction window usually covers different-order multiples with different amplitudes. Theoretical analyses, and synthetic and real seismic data set demonstrations, proved that a combination of these three strategies is effective in improving the adaptive subtraction during seismic multiple attenuation.
    BibTeX:
    @article{Huo_S_2009_j-geophysics_imp_assma,
      author = {Huo, S. and Wang, Y.},
      title = {Improving adaptive subtraction in seismic multiple attenuation},
      journal = {Geophysics},
      year = {2009},
      volume = {74},
      number = {4},
      pages = {59--67},
      doi = {http://dx.doi.org/10.1190/1.3122408}
    }
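The L2-norm matching filter that this paper takes as its starting point can be sketched in a few lines; a single-channel, single-window numpy illustration (all names and the filter length `nf` are hypothetical; the EMCM filter of the paper is multichannel and windowed):

```python
import numpy as np

def ls_matching_filter(data, model, nf=11):
    """Estimate a short filter f minimizing ||data - model * f||_2
    (least squares), then subtract the adapted multiple model.
    data and model are 1D arrays of the same length."""
    n, half = len(data), nf // 2
    M = np.zeros((n, nf))           # columns = shifted copies of the model
    for j in range(nf):
        s = j - half
        if s >= 0:
            M[s:, j] = model[:n - s]
        else:
            M[:n + s, j] = model[-s:]
    f, *_ = np.linalg.lstsq(M, data, rcond=None)
    return data - M @ f             # estimated primaries
```

When the multiple model is an exactly delayed and scaled copy of the actual multiples, the residual vanishes; the L2 criterion starts leaking primary energy precisely when primaries and multiples overlap, which motivates the safeguards discussed in the abstract.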
    
    Jacques, L., Duval, L., Chaux, C. & Peyré, G. A panorama on multiscale geometric representations, intertwining spatial, directional and frequency selectivity 2011 Signal Process.
    Vol. 91(12), pp. 2699-2730 
    article DOI URL 
    Abstract: The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. The latter observation has not prevented the design of image representations, which trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and sometimes its invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping "pictures". We hope that this paper will contribute to the appreciation and apprehension of a stream of current research directions in image understanding.
    BibTeX:
    @article{Jacques_L_2011_j-sp_pan_mgrisdfs,
      author = {L. Jacques and L. Duval and C. Chaux and G. Peyré},
      title = {A panorama on multiscale geometric representations, intertwining spatial, directional and frequency selectivity},
      journal = {Signal Process.},
      year = {2011},
      volume = {91},
      number = {12},
      pages = {2699--2730},
      url = {http://www.sciencedirect.com/science/article/B6V18-52RR4XP-3/2/79bc3e4da8e86b67450190704480d111},
      doi = {http://dx.doi.org/10.1016/j.sigpro.2011.04.025}
    }
    
    Jorgensen, P.E.T. & Song, M.-S. Comparison of Discrete and Continuous Wavelet Transforms 2009
    Vol. 1-10 
    incollection  
    Abstract: Encyclopedia of Complexity and Systems Science provides an authoritative single source for understanding and applying the concepts of complexity theory together with the tools and measures for analyzing complex systems in all fields of science and engineering. The science and tools of complexity and systems science include theories of self-organization, complex systems, synergetics, dynamical systems, turbulence, catastrophes, instabilities, nonlinearity, stochastic processes, chaos, neural networks, cellular automata, adaptive systems, and genetic algorithms. Examples of near-term problems and major unknowns that can be approached through complexity and systems science include: The structure, history and future of the universe; the biological basis of consciousness; the integration of genomics, proteomics and bioinformatics as systems biology; human longevity limits; the limits of computing; sustainability of life on earth; predictability, dynamics and extent of earthquakes, hurricanes, tsunamis, and other natural disasters; the dynamics of turbulent flows; lasers or fluids in physics, microprocessor design; macromolecular assembly in chemistry and biophysics; brain functions in cognitive neuroscience; climate change; ecosystem management; traffic management; and business cycles. All these seemingly quite different kinds of structure formation have a number of important features and underlying structures in common. These deep structural similarities can be exploited to transfer analytical methods and understanding from one field to another. This unique work will extend the influence of complexity and system science to a much wider audience than has been possible to date.
    BibTeX:
    @incollection{Jorgensen_P_2009_incoll_com_dcwt,
      author = {Jorgensen, P. E. T. and Song, M.-S.},
      title = {Comparison of Discrete and Continuous Wavelet Transforms},
      booktitle = {Encyclopedia of Complexity and Systems Science},
      publisher = {Springer},
      year = {2009},
      volume = {1-10}
    }
    
    Kabir, N. & Verschuur, D.J. Expert answers: Does Parabolic Radon transform multiple removal hurt amplitudes for AVO analysis? 2007 CSEG Recorder, pp. 10-14  article  
    Abstract: Multiples are a menace and their elimination from the seismic data presents a real challenge to the seismic processors. One of the commonly used tools in the processor's arsenal is the Radon Transform. However, when it comes to removal of multiples before AVO analysis, reservations are usually expressed for application of this transform, in that it hurts the amplitudes, especially on the near traces in the gather. This apprehension has been cast in the form of the following question and answers were sought from two well-known experts in this area, namely Nurul Kabir (BP) and Eric Verschuur (Delft). The order of the responses given below is the order in which we received them. We thank the experts for sending in their responses.
    BibTeX:
    @article{Kabir_N_2007_j-cseg-recorder_par_rtmrhaavoa,
      author = {Kabir, N. and Verschuur, D. J.},
      title = {Expert answers: Does Parabolic Radon transform multiple removal hurt amplitudes for AVO analysis?},
      journal = {CSEG Recorder},
      year = {2007},
      pages = {10--14}
    }
    
    Kaplan, S.T. & Innanen, K.A. Adaptive separation of free-surface multiples through independent component analysis 2008 Geophysics
    Vol. 73(3), pp. V29-V36 
    article DOI URL 
    Abstract: We present a three-stage algorithm for adaptive separation of free-surface multiples. The free-surface multiple elimination (FSME) method requires, as deterministic prerequisites, knowledge of the source wavelet and deghosted data. In their absence, FSME provides an estimate of free-surface multiples that must be subtracted adaptively from the data. First we construct several orders from the free-surface multiple prediction formula. Next we use the full recording duration of any given data trace to construct filters that attempt to match the data and the multiple predictions. This kind of filter produces adequate phase results, but the order-by-order nature of the free-surface algorithm brings results that remain insufficient for straightforward subtraction. Then we construct, trace by trace, a mixing model in which the mixtures are the data trace and its orders of multiple predictions. We separate the mixtures through a blind source separation technique, in particular by employing independent component analysis. One of the recovered signals is a data trace without free-surface multiples. This technique sidesteps the subtraction inherent in most adaptive subtraction methods by separating the desired signal from the free-surface multiples. The method was applied to synthetic and field data. We compared the field data to a published method and found comparable results.
    BibTeX:
    @article{Kaplan_S_2008_j-geophysics_ada_sfsmica,
      author = {Sam T. Kaplan and Kristopher A. Innanen},
      title = {Adaptive separation of free-surface multiples through independent component analysis},
      journal = {Geophysics},
      publisher = {SEG},
      year = {2008},
      volume = {73},
      number = {3},
      pages = {V29--V36},
      url = {http://link.aip.org/link/?GPY/73/V29/1},
      doi = {http://dx.doi.org/10.1190/1.2890407}
    }
    
    Lin, D., Young, J., Huang, Y. & Hartmann, M. 3D SRME application in the Gulf of Mexico 2004 SEG Technical Program Expanded Abstracts
    Vol. 23(1), Annual International Meeting, pp. 1257-1260 
    inproceedings DOI URL 
    Abstract: The effective removal of surface multiples is critical for imaging subsalt structures in the deepwater Gulf of Mexico. The widely used 2D surface-related multiple elimination (SRME) is inadequate for rugose reflectors. We extend the SRME methodology to three dimensions through the construction of high density and wide azimuth data. We demonstrate the success of our method with results from a case study.
    BibTeX:
    @inproceedings{Lin_D_2004_p-seg_3d_srmeagm,
      author = {D. Lin and J. Young and Y. Huang and M. Hartmann},
      title = {3D SRME application in the Gulf of Mexico},
      booktitle = {Annual International Meeting},
      journal = {SEG Technical Program Expanded Abstracts},
      publisher = {Soc. Expl. Geophysicists},
      year = {2004},
      volume = {23},
      number = {1},
      pages = {1257--1260},
      url = {http://link.aip.org/link/?SGA/23/1257/1},
      doi = {http://dx.doi.org/10.1190/1.1851098}
    }
    
    Lines, L. Suppression of short-period multiples --- deconvolution or model-based inversion? 1996 J. Can. Explor. Geophys.
    Vol. 32, pp. 63-72 
    article  
    BibTeX:
    @article{Lines_L_1996_j-can-j-explor-geophys_sup_spmdmbi,
      author = {Lines, L.},
      title = {Suppression of short-period multiples --- deconvolution or model-based inversion?},
      journal = {J. Can. Explor. Geophys.},
      year = {1996},
      volume = {32},
      pages = {63--72}
    }
    
    Matson, K. & Dragoset, B. An introduction to this special section --- Multiple attenuation 2005 The Leading Edge
    Vol. 24, pp. 252 
    article  
    Abstract: The discipline of multiple attenuation continues to make significant steps forward in the ultimate goal of eliminating multiples from various types of seismic data. As a retrospective on this, we refer back to the last TLE special section on multiple attenuation (January 1999) and compare it to the papers in this issue and the recent literature. The 1999 issue featured significant progress in the attenuation of multiples using wave-equation-based methods, which are popularly referred to as surface-related multiple elimination or SRME. A significant portion of that section was populated with papers from a multiple attenuation workshop held at SEG's 1997 Annual Meeting in Dallas. At that workshop, there was a growing acceptance that SRME was not only a viable method, but that it could attenuate multiples that previous methods could not. This was, of course, the promise of theory borne out in practice; however, one important fly in the ointment was that all applications were limited to 2D. While hopeful, the tone at the end of the workshop was: What about 3D SRME?
    Since then, industry and academia have made significant progress in extending these methods to 3D. At last year's SEG meeting in Denver, we witnessed the first ever session entirely devoted to 3D multiple prediction, with participants from across the industry taking part. Some of those papers appear in this issue. They herald not just how far we have come, but how far we have to go as well.
    While some hopes from the 1999 special section have come to fruition, others have not or, at least, have done so to a lesser degree. For example, that section also held out hope that in the future there would be a special section dedicated to not just removing multiples, but to using those multiples as signal to create useful subsurface images. Progress on that front has been slower than on the 3D attenuation problem. Only one paper here falls in that category.
    Since 1999 we have seen an increase in the use and acceptance of high-resolution Radon transform methods as a way to improve Radon multiple attenuation. Interestingly, no manuscripts were received on those advances, suggesting that acceptance of this method has already run its course.
    In this special section, we organized the papers into three groups. The first deals with advances made in extending surface multiple prediction to 3D. The second deals with the related problem of subtracting the predicted multiples from the actual multiples present in our data. This is important because often the subtraction methods can account for some of the practical limitations in the prediction process. The third group of papers deals with everything else, ranging from an innovative way to mitigate the generation of multiples in acquisition using an acoustic blanket to dampen the effect of the free-surface, to a method that images the subsurface using multiples from VSP data.
    What does the future hold for multiples? As in all scientific pursuits, new methods will come along that supersede the previous state of the art. We anticipate:
    - Continued investigation and progress on using multiples to augment or even replace images derived from primary reflections alone.
    - The advent of methods that deal successfully with the multiples associated with small-scale diffractions that continue to plague many datasets where the overburden is complex or rugose.
    - Interbed multiple attenuation methods that come into their own as the attenuation of surface multiples continues to improve.
    - The appearance of new methods that go beyond the current state of SRME practice.
    And, as a final word, who knows, maybe marine exploration will move into waters so deep that surface multiples will no longer be a concern for that type of data.
    BibTeX:
    @article{Matson_K_2005_j-tle_int_ssma,
      author = {Matson, K. and Dragoset, B.},
      title = {An introduction to this special section --- Multiple attenuation},
      journal = {The Leading Edge},
      year = {2005},
      volume = {24},
      pages = {252},
      note = {special section : Multiple attenuation}
    }
    
    Monk, D.J. Wave-equation multiple suppression using constrained gross-equalization 1993 Geophys. Prospect.
    Vol. 41(6), pp. 725-736 
    article DOI URL 
    Abstract: A method for improving the attenuation of water-layer multiple energy is suggested. The improvement is achieved using wave-equation extrapolation to generate an initial model of the multiple energy, and then constraining the way in which this model is modified to fit the observed multiple energy. Reconciling the initial multiple model with the input data is a critical part of this process and several techniques have been suggested previously by other authors. The approach used here is to fit the time, amplitude and phase of the wavelets by adapting the initial model trace using a weighted sum of four traces which can each be derived from the initial multiple model trace. Results on real data suggest that attenuation of primary energy is minimized using this technique, without diminishing the level of multiple attenuation.
    BibTeX:
    @article{Monk_D_1993_j-geophys-prospect_wav_emscge,
      author = {Monk, D. J.},
      title = {Wave-equation multiple suppression using constrained gross-equalization},
      journal = {Geophys. Prospect.},
      publisher = {Blackwell Publishing Ltd},
      year = {1993},
      volume = {41},
      number = {6},
      pages = {725--736},
      url = {http://dx.doi.org/10.1111/j.1365-2478.1993.tb00880.x},
      doi = {http://dx.doi.org/10.1111/j.1365-2478.1993.tb00880.x}
    }
    
    Neelamani, R. (Neelsh), Baumstein, A. & Ross, W.S. Adaptive subtraction using complex-valued curvelet transforms 2010 Geophysics
    Vol. 75(4), pp. V51-V60 
    article DOI URL 
    Abstract: We propose a complex-valued curvelet transform-based (CCT-based) algorithm that adaptively subtracts from seismic data those noises for which an approximate template is available. The CCT decomposes a geophysical data set in terms of small reflection pieces, with each piece having a different characteristic frequency, location, and dip. One can precisely change the amplitude and shift the location of each seismic reflection piece in a template by controlling the amplitude and phase of the template's CCT coefficients. Based on these insights, our approach uses the phase and amplitude of the data's and template's CCT coefficients to correct misalignment and amplitude errors in the noise template, thereby matching the adapted template with the actual noise in the seismic data, reflection event-by-event. We also extend our approach to subtract noises that require several templates to be approximated. By itself, the method can only correct small misalignment errors (±5 ms in 50-Hz data) in the template; it relies on conventional least-squares (LS) adaptation to correct large-scale misalignment errors, such as wavelet mismatches and bulk shifts. Synthetic and real-data results illustrate that the CCT-based approach improves upon the LS approach and a curvelet-based approach described by Herrmann and Verschuur.
    BibTeX:
    @article{Neelamani_R_2010_j-geophysics_ada_scvct,
      author = {Ramesh (Neelsh) Neelamani and Anatoly Baumstein and Warren S. Ross},
      title = {Adaptive subtraction using complex-valued curvelet transforms},
      journal = {Geophysics},
      publisher = {SEG},
      year = {2010},
      volume = {75},
      number = {4},
      pages = {V51--V60},
      url = {http://link.aip.org/link/?GPY/75/V51/1},
      doi = {http://dx.doi.org/10.1190/1.3453425}
    }
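The coefficient-wise correction described in this abstract can be illustrated on generic complex transform coefficients; a numpy sketch (the clip `max_gain` is a hypothetical safeguard: unconstrained, the adapted template would reproduce the data exactly and subtraction would remove everything):

```python
import numpy as np

def adapt_complex_coeffs(data_c, tmpl_c, max_gain=2.0):
    """Adapt each complex template coefficient to the data with a
    phase rotation (misalignment) and a clipped gain (amplitude)."""
    gain = np.clip(np.abs(data_c) / (np.abs(tmpl_c) + 1e-12), 0.0, max_gain)
    rot = np.exp(1j * (np.angle(data_c) - np.angle(tmpl_c)))
    return gain * rot * tmpl_c
```

In the actual method the corrections are also bounded in phase (hence the small-misalignment limit quoted in the abstract) and complemented by least-squares adaptation for bulk errors.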
    
    Neidell, N.S. & Taner, M.T. Semblance and other coherency measures for multichannel data 1971 Geophysics
    Vol. 36(3), pp. 482-497 
    article  
    Abstract: The concept of semblance is introduced, along with a descriptive review of several of the more common likeness or coherence measures. Measures are considered from three points of view: the domain in which they are applied, the philosophy of their design, and the manner in which they are used. Crosscorrelation, the most familiar of the likeness criteria, is examined in detail. Differences of design philosophy are noted as expressing themselves by a change in the normalization. Semblance is shown to be related to an energy-normalized crosscorrelation and to share certain features of the summation method or stack which has been used recently as a coherence measure. Several coherence measures, including semblance, are considered in a problem environment---the determination of stacking velocities from multiple ground coverage seismic data. A noise-free synthetic example is studied in order to compare discrimination thresholds of the various methods. Semblance, when properly interpreted, proves to have the greatest power of discrimination among the candidates examined for the particular application.
    BibTeX:
    @article{Neidell_N_1971_j-geophysics_sem_ocmmd,
      author = {N. S. Neidell and M. Turhan Taner},
      title = {Semblance and other coherency measures for multichannel data},
      journal = {Geophysics},
      year = {1971},
      volume = {36},
      number = {3},
      pages = {482--497}
    }
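Semblance as defined in this paper is the energy of the stack divided by N times the total energy; a minimal numpy version for a window of an (n_traces, n_samples) gather (function and argument names are illustrative):

```python
import numpy as np

def semblance(gather):
    """Semblance of a 2D gather (traces x samples): 1 for identical
    traces, about 1/n_traces for incoherent ones."""
    n_traces = gather.shape[0]
    stack_energy = np.sum(np.sum(gather, axis=0) ** 2)
    total_energy = np.sum(gather ** 2)
    return stack_energy / (n_traces * total_energy)
```

This is the energy-normalized crosscorrelation interpretation mentioned in the abstract: the measure is bounded in [0, 1] regardless of trace amplitudes.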
    
    Nowak, E.J. & Imhof, M.G. Amplitude preservation of Radon-based multiple-removal filters 2006 Geophysics
    Vol. 71(5), pp. V123-V126 
    article DOI  
    Abstract: This study examines the effect of filtering in the Radon transform domain on reflection amplitudes. Radon filters are often used for removal of multiple reflections from normal moveout-corrected seismic data. The unweighted solution to the Radon transform reduces reflection amplitudes at both near and far offsets due to a truncation effect. However, the weighted solutions to the transform produce localized events in the transform domain, which minimizes this truncation effect. Synthetic examples suggest that filters designed in the Radon domain based on a weighted solution to the linear, parabolic, or hyperbolic transforms preserve the near- and far-offset reflection amplitudes while removing the multiples; whereas the unweighted solutions diminish reflection amplitudes, which may distort subsequent amplitude-versus-offset (AVO) analysis.
    BibTeX:
    @article{Nowak_E_2006_j-geophysics_amp_prbmrf,
      author = {Nowak, E. J. and Imhof, M. G.},
      title = {Amplitude preservation of Radon-based multiple-removal filters},
      journal = {Geophysics},
      year = {2006},
      volume = {71},
      number = {5},
      pages = {V123--V126},
      doi = {http://dx.doi.org/10.1190/1.2243711}
    }
    
    Nuzzo, L. & Quarta, T. Improvement in GPR coherent noise attenuation using $\tau$-$p$ and wavelet transforms 2004 Geophysics
    Vol. 69(3), pp. 789-802 
    article DOI URL 
    Abstract: We present a new application of modern filtering techniques to ground-penetrating radar (GPR) data processing for coherent noise attenuation. We compare the performance of the discrete wavelet transform (DWT) and the linear Radon transform (tau-p) to classical time-space and Fourier domain methods using a synthetic model and real data. The synthetic example simulates problems such as system ringing and surface scattering, which are common in real cases. The field examples illustrate the removal of nearly horizontal but variable-amplitude noise features. In such situations, classical space-domain techniques require several trials before finding an appropriate averaging window size. Our comparative analysis indicates that the DWT method is better suited for local filtering than are 2D frequency-domain (f-k) techniques, although the latter are computationally efficient. Radon-based methods are slightly superior to the techniques previously used for local directional filtering, but they are slow and quite sensitive to the p-sampling rate, p-range, and sizes of the muting zone. Our results confirm that Radon and wavelet methods are effective in removing noise from GPR images with minimal distortions of the signal.
    BibTeX:
    @article{Nuzzo_L_2004_j-geophysics_imp_gprcnatpwt,
      author = {Luigia Nuzzo and Tatiana Quarta},
      title = {Improvement in GPR coherent noise attenuation using $\tau$-$p$ and wavelet transforms},
      journal = {Geophysics},
      publisher = {SEG},
      year = {2004},
      volume = {69},
      number = {3},
      pages = {789--802},
      url = {http://link.aip.org/link/?GPY/69/789/1},
      doi = {http://dx.doi.org/10.1190/1.1759465}
    }
    
    Pang, T., Lu, W. & Ma, Y. Adaptive multiple subtraction using a constrained $L_1$-norm method with lateral continuity 2009 J. Appl. Geophys.
    Vol. 6, pp. 241-247 
    article  
    Abstract: The L1-norm method is one of the widely used matching filters for adaptive multiple subtraction. When the primaries and multiples are mixed together, the L1-norm method might damage the primaries, leading to poor lateral continuity. In this paper, we propose a constrained L1-norm method for adaptive multiple subtraction by introducing the lateral continuity constraint for the estimated primaries. We measure the lateral continuity using prediction-error filters (PEF). We illustrate our method with the synthetic Pluto dataset. The results show that the constrained L1-norm method can simultaneously attenuate the multiples and preserve the primaries.
    BibTeX:
    @article{Pang_T_2009_j-app-geophysics_ada_msucl1nmlc,
      author = {Tinghu Pang and Wenkai Lu and Yongjun Ma},
      title = {Adaptive multiple subtraction using a constrained $L_1$-norm method with lateral continuity},
      journal = {J. Appl. Geophys.},
      year = {2009},
      volume = {6},
      pages = {241--247}
    }
    
    Pesquet, J.-C. & Leporini, D. A new wavelet estimator for image denoising 1997
    Vol. 1, IEE Sixth Int. Conf. Im. Proc. Appl., pp. 249-253 
    inproceedings  
    Abstract: Nonlinear wavelet/wavelet packet (W/WP) estimators are becoming popular methods for image denoising. Most of the related works are based on a hard or soft thresholding of the W/WP coefficients. In the additive noise scenario, the threshold which is optimal in a minimax [4, 3] or MDL [6] sense can be derived analytically. However this threshold value does not always lead to satisfactory results since it does not take into account the statistical properties of the image to be restored. It is therefore often necessary to empirically adjust the threshold values for each subband of the decomposition. A better solution may be to introduce some prior model of the W/WP coefficients of the original image and deduce their optimal estimates for a given criterion following a Bayesian approach as shown in Simoncelli and Adelson [9] or Pesquet et al. [8]. Then, the difficulty lies first in the choice of a both simple and reliable model and secondly in the robust determination of its hyperparameters. Another efficient cross-validation technique has also been proposed recently by Nowak [7] but it requires several realizations of the noisy image. In this work, we present a new nonlinear estimation method whose characteristics are adapted to the statistics of the image under study. In Section 2, we describe the statistical problem which is addressed in this paper. In Section 3, we present different methods for adaptive image denoising. We emphasize the shortcomings of classical threshold estimators and propose an alternative approach. We then illustrate the interest of our method by simulation examples on satellite images.
    BibTeX:
    @inproceedings{Pesquet_J_1997_icipa_new_weid,
      author = {Pesquet, J.-C. and Leporini, D.},
      title = {A new wavelet estimator for image denoising},
      booktitle = {IEE Sixth Int. Conf. Im. Proc. Appl.},
      year = {1997},
      volume = {1},
      pages = {249--253}
    }
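The classical hard and soft thresholding rules that this paper takes as its baseline are one-liners; a numpy reference sketch (the Bayesian estimator actually proposed in the paper is not reproduced here):

```python
import numpy as np

def hard_threshold(c, t):
    """Hard thresholding: zero coefficients below t, keep the rest."""
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    """Soft thresholding: shrink all coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

The empirical per-subband tuning criticized in the abstract amounts to choosing a different `t` for each subband of the decomposition.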
    
    Pica, A., Poulain, G., David, B., Magesan, M., Baldock, S., Weisser, T., Hugonnet, P. & Herrmann, P. 3D surface-related multiple modeling 2005 The Leading Edge
    Vol. 24, pp. 292-296 
    article  
    Abstract: The shape of seismic reflected energy on shot or CMP gathers can be extremely complicated in comparison with the actual geometry of geologic generators. A simple synclinal structure may produce a triplication in the zero-offset domain, and migration processing is needed to resolve this situation. By comparison, multiple generation "squares" (at least for first-order multiples) the degree of complexity of the reverberated reflected energy, and, in general, there is no domain, neither time, depth, nor pre- or postmigrated, where multiples and primaries can be simplified simultaneously.
    BibTeX:
    @article{Pica_A_2005_j-tle_3d_srmm,
      author = {Pica, A. and G. Poulain and B. David and M. Magesan and S. Baldock and T. Weisser and P. Hugonnet and P. Herrmann},
      title = {3D surface-related multiple modeling},
      journal = {The Leading Edge},
      year = {2005},
      volume = {24},
      pages = {292--296},
      note = {Special section : Multiple attenuation}
    }
    
    Pokrovskaia, T. & Wombell, R. Attenuation of Residual Multiples and Coherent Noise in the Wavelet Transform Domain 2004 Proc. EAGE Conf. Tech. Exhib.  inproceedings  
    Abstract: Although generally very powerful, noise and multiple attenuation techniques can often leave remnants in seismic data. For example, noise from extraneous sources such as rigs and other boats can be hard to model and fully remove using standard methods. Similarly, multiple remnants are often present after multiple attenuation when multiples are generated by relatively complex geology, such as rugose water bottoms or salt, and as such do not conform to the assumptions of most multiple attenuation algorithms. These remnants can cause problems in later processing, for example through the generation of migration noise and contamination of AVO analysis, etc., and therefore often need to be further attenuated in the processing sequence. As these remnants are often localized and may have high amplitudes compared to the underlying data, they can be relatively easy to identify and can be targeted in a number of different domains. The application of a wavelet transform (wavelet decomposition) on pre-stack data can be used to separate signal from coherent noise in both frequency and time. The noise can then be removed from the data by a variety of noise attenuation methods in the wavelet domain. We show two examples to illustrate the effectiveness of this method: the attenuation of residual multiples and attenuation of boat noise.
    BibTeX:
    @inproceedings{Pokrovskaia_T_2004_p-eage_att_rmcnwtd,
      author = {T. Pokrovskaia and R. Wombell},
      title = {Attenuation of Residual Multiples and Coherent Noise in the Wavelet Transform Domain},
      booktitle = {Proc. EAGE Conf. Tech. Exhib.},
      publisher = {European Assoc. Geoscientists Eng.},
      year = {2004},
      note = {Exp. abstracts}
    }
    
    Ristow, D. & Kosbahn, B. Time-varying prediction filtering by means of updating 1979 Geophys. Prospect.
    Vol. 27(1), pp. 40-61 
    article DOI URL 
    Abstract: In contrast to the conventional deconvolution technique (Wiener-Levinson), the spike-, predictive-, and gap-deconvolution are realized with the help of an adaptive updating technique of the prediction operator. As the prediction operator will be updated from sample to sample, this procedure can be used for time-variant deconvolution. The updating formulae discussed are the adaptive updating formula and the sequential algorithm for the sequential estimation technique. This updating technique is illustrated using both synthetic and real seismic data.
    BibTeX:
    @article{Ristow_D_1979_j-geophys-prospect_tim_vpfmu,
      author = {Ristow, D. and Kosbahn, B.},
      title = {Time-varying prediction filtering by means of updating},
      journal = {Geophys. Prospect.},
      publisher = {Blackwell Publishing Ltd},
      year = {1979},
      volume = {27},
      number = {1},
      pages = {40--61},
      url = {http://dx.doi.org/10.1111/j.1365-2478.1979.tb00958.x},
      doi = {http://dx.doi.org/10.1111/j.1365-2478.1979.tb00958.x}
    }
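The sample-by-sample operator updating described in this abstract can be sketched with a normalized-LMS prediction filter. This is a generic adaptive scheme standing in for the paper's updating formulae, and all names (`lms_prediction`, `mu`, `gap`) are illustrative, not taken from the paper:

```python
import numpy as np

def lms_prediction(x, n_taps=8, gap=1, mu=0.5, eps=1e-8):
    """Sample-by-sample adaptive prediction filter (normalized LMS).

    Predicts x[n] from the n_taps samples ending gap samples earlier
    and updates the operator at every sample; the prediction error is
    the time-variant deconvolution output."""
    w = np.zeros(n_taps)
    err = np.zeros(len(x))
    for n in range(n_taps + gap - 1, len(x)):
        past = x[n - gap - n_taps + 1:n - gap + 1][::-1]  # newest first
        e = x[n] - w @ past
        w += mu * e * past / (past @ past + eps)  # power-normalized step
        err[n] = e
    return err, w

# a perfectly predictable signal: the residual should decay toward zero
t = np.arange(2000)
x = np.sin(2 * np.pi * 0.05 * t)
err, w = lms_prediction(x)
```

Because the operator tracks the data sample by sample, the same loop handles nonstationary inputs without windowing.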
    
    Robinson, E.A. & Treitel, S. Principles of digital Wiener filtering 1967 Geophys. Prospect.
    Vol. 15(3), pp. 311-332 
    article DOI URL 
    Abstract: The theory of statistical communication provides an invaluable framework within which it is possible to formulate design criteria and actually obtain solutions for digital filters. These are then applicable in a wide range of geophysical problems. The basic model for the filtering process considered here consists of an input signal, a desired output signal, and an actual output signal. If one minimizes the energy or power existing in the difference between desired and actual filter outputs, it becomes possible to solve for the so-called optimum, or least squares filter, commonly known as the "Wiener" filter. In this paper we derive from basic principles the theory leading to such filters. The analysis is carried out in the time domain in discrete form. We propose a model of a seismic trace in terms of a statistical communication system. This model trace is the sum of a signal time series plus a noise time series. If we assume that estimates of the signal shape and of the noise autocorrelation are available, we may calculate Wiener filters which will attenuate the noise and sharpen the signal. The net result of these operations can then in general be expected to increase seismic resolution. We show a few numerical examples to illustrate the model's applicability to situations one might find in practice.
    BibTeX:
    @article{Robinson_E_1967_j-geophys-prospect_pri_dwf,
      author = {Robinson, E. A. and Treitel, S.},
      title = {Principles of digital Wiener filtering},
      journal = {Geophys. Prospect.},
      publisher = {Blackwell Publishing Ltd},
      year = {1967},
      volume = {15},
      number = {3},
      pages = {311--332},
      url = {http://dx.doi.org/10.1111/j.1365-2478.1967.tb01793.x},
      doi = {http://dx.doi.org/10.1111/j.1365-2478.1967.tb01793.x}
    }
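The least-squares design described in this abstract reduces to solving the normal equations built from the input autocorrelation and the input/desired cross-correlation. A minimal numpy sketch of that classical recipe (function name and toy data are mine, not from the paper):

```python
import numpy as np

def wiener_shaping_filter(x, d, n_taps):
    """Least-squares (Wiener) filter shaping input x toward desired
    output d: solve the normal equations R f = g, with R the Toeplitz
    autocorrelation matrix of x and g the cross-correlation of d and x."""
    N = len(x)
    r = np.array([x[k:] @ x[:N - k] for k in range(n_taps)])  # autocorrelation lags
    R = np.array([[r[abs(i - j)] for j in range(n_taps)]
                  for i in range(n_taps)])
    g = np.array([d[k:] @ x[:N - k] for k in range(n_taps)])  # cross-correlation lags
    return np.linalg.solve(R, g)

# toy check: the filter should recover a known convolution
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
true_f = np.array([1.0, -0.5, 0.25])
d = np.convolve(x, true_f)[:len(x)]
f = wiener_shaping_filter(x, d, n_taps=3)
```

For long filters the Toeplitz structure of R is normally exploited (Levinson recursion) rather than solved densely as here.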
    
    Schimmel, M. & Paulssen, H. Noise reduction and detection of weak, coherent signals through phase-weighted stacks 1997 Geophys. J. Int.
    Vol. 130(2), pp. 495-505 
    article DOI  
    Abstract: We present a new tool for efficient incoherent noise reduction for array data employing complex trace analysis. An amplitude-unbiased coherency measure is designed based on the instantaneous phase, which is used to weight the samples of an ordinary, linear stack. The result is called the phase-weighted stack (PWS) and is cleaned from incoherent noise. PWS thus permits detection of weak but coherent arrivals. The method presented can easily be extended to phase-weighted cross-correlations or be applied in the $\tau p$ domain. We illustrate and discuss the advantages and disadvantages of PWS in comparison with other coherency measures and present examples. We further show that our non-linear stacking technique enables us to detect a weak lower-mantle P-to-S conversion from a depth of approximately 840 km on array data. Hints of an 840 km discontinuity have been reported; however, such a discontinuity is not yet established due to the lack of further evidence.
    BibTeX:
    @article{Schimmel_M_1997_j-geophys-j-int_noi_rdwcstpws,
      author = {M. Schimmel and H. Paulssen},
      title = {Noise reduction and detection of weak, coherent signals through phase-weighted stacks},
      journal = {Geophys. J. Int.},
      year = {1997},
      volume = {130},
      number = {2},
      pages = {495--505},
      doi = {http://dx.doi.org/10.1111/j.1365-246X.1997.tb05664.x}
    }
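The PWS recipe in this abstract, a linear stack weighted by an amplitude-unbiased phase-coherency measure, can be sketched as follows. The FFT-based Hilbert transform and the default sharpness exponent are implementation choices of this sketch, not details taken from the paper:

```python
import numpy as np

def analytic(x):
    """FFT-based analytic signal (numpy-only Hilbert transform)."""
    n = x.shape[-1]
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x, axis=-1) * h, axis=-1)

def phase_weighted_stack(traces, nu=2.0):
    """Phase-weighted stack: an ordinary linear stack weighted by the
    amplitude-unbiased coherency of the instantaneous phases."""
    z = analytic(traces)
    phasors = z / np.abs(z)                   # unit-modulus phase vectors
    coherency = np.abs(phasors.mean(axis=0))  # 1 = perfectly coherent
    return traces.mean(axis=0) * coherency ** nu

# coherent wavelet plus incoherent noise: PWS should suppress the noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 400)
wavelet = np.exp(-((t - 0.5) / 0.02) ** 2) * np.cos(2 * np.pi * 40.0 * t)
traces = wavelet + 0.5 * rng.standard_normal((20, 400))
pws = phase_weighted_stack(traces)
linear = traces.mean(axis=0)
```

Off the wavelet's support the phasors are random, the coherency is small, and the residual stack noise is strongly attenuated relative to the plain mean.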
    
    Sinha, S., Routh, P.S., Anno, P.D. & Castagna, J.P. Spectral decomposition of seismic data with continuous-wavelet transform 2005 Geophysics
    Vol. 70, pp. P19-P25 
    article  
    Abstract: This paper presents a new methodology for computing a time-frequency map for nonstationary signals using the continuous-wavelet transform (CWT). The conventional method of producing a time-frequency map using the short time Fourier transform (STFT) limits time-frequency resolution by a predefined window length. In contrast, the CWT method does not require preselecting a window length and does not have a fixed time-frequency resolution over the time-frequency space. CWT uses dilation and translation of a wavelet to produce a time-scale map. A single scale encompasses a frequency band and is inversely proportional to the time support of the dilated wavelet. Previous workers have converted a time-scale map into a time-frequency map by taking the center frequencies of each scale. We transform the time-scale map by taking the Fourier transform of the inverse CWT to produce a time-frequency map. Thus, a time-scale map is converted into a time-frequency map in which the amplitudes of individual frequencies rather than frequency bands are represented. We refer to such a map as the time-frequency CWT (TFCWT). We validate our approach with a nonstationary synthetic example and compare the results with the STFT and a typical CWT spectrum. Two field examples illustrate that the TFCWT potentially can be used to detect frequency shadows caused by hydrocarbons and to identify subtle stratigraphic features for reservoir characterization.
    BibTeX:
    @article{Sinha_S_2005_j-geophysics_spe_dsdcwt,
      author = {Satish Sinha and Partha S. Routh and Phil D. Anno and John P. Castagna},
      title = {Spectral decomposition of seismic data with continuous-wavelet transform},
      journal = {Geophysics},
      year = {2005},
      volume = {70},
      pages = {P19--P25}
    }
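For contrast with the TFCWT proposed in this paper, here is a sketch of the conventional approach the authors improve on: a Morlet CWT whose scales are mapped to center frequencies via the common convention f = w0/(2*pi*s). The normalization and wavelet parameters are generic choices, not the paper's TFCWT:

```python
import numpy as np

def morlet_cwt(x, dt, freqs, w0=6.0):
    """CWT with a frequency-domain Morlet wavelet; each output row is
    the scale conventionally mapped to center frequency f = w0/(2*pi*s)."""
    n = len(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, dt)
    X = np.fft.fft(x)
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)
        # analytic Morlet: support on positive frequencies only
        psi_hat = np.where(omega > 0,
                           np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2),
                           0.0)
        out[i] = np.fft.ifft(X * psi_hat * np.sqrt(s))
    return out

# a 30 Hz Gaussian burst: the map should peak on the 30 Hz row
dt = 0.002
t = np.arange(1000) * dt
x = np.cos(2 * np.pi * 30.0 * t) * np.exp(-((t - 1.0) / 0.1) ** 2)
freqs = np.arange(10.0, 60.0, 5.0)
tf = np.abs(morlet_cwt(x, dt, freqs))
```

Each row represents a whole frequency band around its center frequency, which is precisely the resolution limitation the TFCWT construction addresses.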
    
    Spitz, S. Pattern recognition, spatial predictability, and subtraction of multiple events 1999 The Leading Edge
    Vol. 18, pp. 55-58 
    article  
    Abstract: The suppression of multiple events is a crucial task when processing seismic data because they can obscure, or sometimes be mistaken for, genuine reflections. This task has been with us for a long time. The ambiguities caused by multiple reflections are perhaps more acutely felt today, with the growing need for reliable seismic attributes at the target level. From the processing side, although many techniques have been described in the literature, only a few have reached the industrial stage.
    BibTeX:
    @article{Spitz_S_1999_j-tle_pat_rspsme,
      author = {Spitz, S.},
      title = {Pattern recognition, spatial predictability, and subtraction of multiple events},
      journal = {The Leading Edge},
      year = {1999},
      volume = {18},
      pages = {55--58}
    }
    
    Spitz, S., Hampson, G. & Pica, A. Simultaneous Source Separation Using Wave Field Modeling and PEF Adaptive Subtraction 2009 Proc. EAGE Marine Seismic Workshop  inproceedings  
    Abstract: The acquisition of n-shots, more or less simultaneously, increases acquisition efficiency and collects a wider range of information for imaging and reservoir characterisation. Its success relies critically on the ability to separate n-shots from one recording. Using a difficult data example we show that a PEF-based adaptive subtraction of the estimated wavefield due to a secondary source provides an effective separation of the sources.
    BibTeX:
    @inproceedings{Spitz_S_2009_p-eage-marine-w_sim_sswfmpefas,
      author = {S. Spitz and G. Hampson and A. Pica},
      title = {Simultaneous Source Separation Using Wave Field Modeling and PEF Adaptive Subtraction},
      booktitle = {Proc. EAGE Marine Seismic Workshop},
      publisher = {European Assoc. Geoscientists Eng.},
      year = {2009}
    }
    
    Taner, M.T. Long period sea-floor multiples and their suppression 1980 Geophys. Prospect.
    Vol. 28(1), pp. 30-48 
    article DOI  
    Abstract: Multiple sea-floor reflections in deep water often are not effectively suppressed by either CDP stacking or standard predictive deconvolution methods. These methods fail because the reflection coefficient varies markedly with angle of incidence and also because of the variation of arrival time with offset and because of dip. For a reasonably flat sea-floor, multiples of various orders and the primary sea-floor reflection which have all been reflected at nearly the same angle lie along a straight line through the origin in time-offset space. This line is called the "radial direction." The multiples which lie along this line show a systematic relationship because they all experience the same water-bottom reflection effect. In other words, multiples behave in a stationary manner along the radial directions on multi-trace seismic records. A technique of multi-channel predictive deconvolution, called "Radial Multiple Suppression," utilizes this aspect to design Wiener operators for the prediction and suppression of water bottom multiples.

    The effectiveness of the technique is demonstrated by the study of field records, autocorrelations, velocity analyses, and stacked sections before and after Radial Multiple Suppression processing.

    BibTeX:
    @article{Taner_M_1980_j-geophys-prospect_lon_psfms,
      author = {Taner, M. T.},
      title = {Long period sea-floor multiples and their suppression},
      journal = {Geophys. Prospect.},
      year = {1980},
      volume = {28},
      number = {1},
      pages = {30--48},
      doi = {http://dx.doi.org/10.1111/j.1365-2478.1980.tb01209.x}
    }
    
    Taner, M.T., Koehler, F. & Sheriff, R.E. Complex seismic trace analysis 1979 Geophysics
    Vol. 44(6), pp. 1041-1063 
    article DOI URL 
    Abstract: The conventional seismic trace can be viewed as the real component of a complex trace which can be uniquely calculated under usual conditions. The complex trace permits the unique separation of envelope amplitude and phase information and the calculation of instantaneous frequency. These and other quantities can be displayed in a color-encoded manner which helps an interpreter see their interrelationship and spatial changes. The significance of color patterns and their geological interpretation is illustrated by examples of seismic data from three areas.
    BibTeX:
    @article{Taner_M_1979_j-geophysics_com_sta,
      author = {M. T. Taner and F. Koehler and R. E. Sheriff},
      title = {Complex seismic trace analysis},
      journal = {Geophysics},
      publisher = {SEG},
      year = {1979},
      volume = {44},
      number = {6},
      pages = {1041--1063},
      url = {http://link.aip.org/link/?GPY/44/1041/1},
      doi = {http://dx.doi.org/10.1190/1.1440994}
    }
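The complex-trace attributes described in this abstract (envelope, instantaneous phase, instantaneous frequency) can be computed from an FFT-based analytic signal. A numpy-only sketch, with discretization choices (phase unwrapping plus a finite-difference gradient) that are mine rather than the paper's:

```python
import numpy as np

def complex_trace_attributes(x, dt):
    """Envelope, instantaneous phase, and instantaneous frequency from
    the analytic (complex) trace, via an FFT-based Hilbert transform."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    z = np.fft.ifft(np.fft.fft(x) * h)  # analytic trace
    envelope = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_freq = np.gradient(phase, dt) / (2.0 * np.pi)
    return envelope, phase, inst_freq

# sanity check: a pure 25 Hz cosine with an integer number of cycles
dt = 0.004
t = np.arange(1000) * dt
x = np.cos(2 * np.pi * 25.0 * t)
env, phase, f_inst = complex_trace_attributes(x, dt)
```

For this signal the envelope is constant and the instantaneous frequency equals the cosine's frequency, which is the separation of amplitude and phase information the abstract refers to.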
    
    Taner, M.T., O'Doherty, R.F. & Koehler, F. Long period multiple suppression by predictive deconvolution in the $x-t$ domain 1995 Geophys. Prospect.
    Vol. 43(4), pp. 433-468 
    article DOI URL 
    Abstract: There are two forms of systematic error in conventional deconvolution as applied to the problem of suppressing multiples with periodicities longer than a hundred milliseconds. One of these is the windowing effect due to the assumption that a true autocorrelation function can be computed from a finite portion of data. The second form of error concerns the assumption of periodicity, which is strictly true only at zero offset for a 1D medium. The seriousness of these errors increases with the lengthening of the multiple period. This paper describes and illustrates a rigorous 2D solution to the predictive deconvolution equations that overcomes both of the systematic errors of conventional 1D approaches. This method is applicable to both the simple or trapped system and to the complex or peg-leg system of multiples. It does not require that the design window be six to ten times larger than the operator dimensions and it is accurate over a wide range of propagation angles. The formulation is kept strictly in the sense of the classical theory of prediction. The solution of the normal equations is obtained by a modified conjugate gradient method of solution developed by Koehler. In this algorithm, the normal equations are not modified by the autocorrelation approximation. As with all linear methods, approximate stationary attitude in the multiple generating process is assumed. This method has not been tested in areas where large changes in the characteristic of the multiple-generating mechanism occur within a seismic spread length.
    BibTeX:
    @article{Taner_M_1995_j-geophys-prospect_lon_pmspdxtd,
      author = {Taner, M. Turhan and O'Doherty, Ronan F. and Koehler, Fulton},
      title = {Long period multiple suppression by predictive deconvolution in the $x-t$ domain},
      journal = {Geophys. Prospect.},
      publisher = {Blackwell Publishing Ltd},
      year = {1995},
      volume = {43},
      number = {4},
      pages = {433--468},
      url = {http://dx.doi.org/10.1111/j.1365-2478.1995.tb00261.x},
      doi = {http://dx.doi.org/10.1111/j.1365-2478.1995.tb00261.x}
    }
    
    Trad, D., Ulrych, T. & Sacchi, M. Latest views of the sparse Radon transform 2003 Geophysics
    Vol. 68(1), pp. 386-399 
    article DOI URL 
    Abstract: The Radon transform (RT) suffers from the typical problems of loss of resolution and aliasing that arise as a consequence of incomplete information, including limited aperture and discretization. Sparseness in the Radon domain is a valid and useful criterion for supplying this missing information, equivalent somehow to assuming smooth amplitude variation in the transition between known and unknown (missing) data. Applying this constraint while honoring the data can become a serious challenge for routine seismic processing because of the very limited processing time available, in general, per common midpoint. To develop methods that are robust, easy to use and flexible to adapt to different problems we have to pay attention to a variety of algorithms, operator design, and estimation of the hyperparameters that are responsible for the regularization of the solution. In this paper, we discuss fast implementations for several varieties of RT in the time and frequency domains. An iterative conjugate gradient algorithm with fast Fourier transform multiplication is used in all cases. To preserve the important property of iterative subspace methods of regularizing the solution by the number of iterations, the model weights are incorporated into the operators. This turns out to be of particular importance, and it can be understood in terms of the singular vectors of the weighted transform. The iterative algorithm is stopped according to a general cross validation criterion for subspaces. We apply this idea to several known implementations and compare results in order to better understand differences between, and merits of, these algorithms.
    BibTeX:
    @article{Trad_D_2003_j-geophysics_lat_vsrt,
      author = {Daniel Trad and Tadeusz Ulrych and Mauricio Sacchi},
      title = {Latest views of the sparse Radon transform},
      journal = {Geophysics},
      publisher = {SEG},
      year = {2003},
      volume = {68},
      number = {1},
      pages = {386--399},
      url = {http://link.aip.org/link/?GPY/68/386/1},
      doi = {http://dx.doi.org/10.1190/1.1543224}
    }
    
    Verschuur, D. & Berkhout, A. Adaptive surface related multiple elimination 1992 Geophysics
    Vol. 57(9), pp. 1166-1177 
    article  
    Abstract: The major amount of multiple energy in seismic data is related to the large reflectivity of the surface. A method is proposed for the elimination of all surface-related multiples by means of a process that removes the influence of the surface reflectivity from the data. An important property of the proposed multiple elimination process is that no knowledge of the subsurface is required. On the other hand, the source signature and the surface reflectivity do need to be provided. As a consequence, the proposed process has been implemented adaptively, meaning that multiple elimination is designed as an inversion process where the source and surface reflectivity properties are estimated and where the multiple-free data equals the inversion residue. Results on simulated data and field data show that the proposed multiple elimination process should be considered as one of the key inversion steps in stepwise seismic inversion.
    BibTeX:
    @article{Verschuur_D_1992_j-geophysics_ada_srme,
      author = {Verschuur, D. and Berkhout, A.},
      title = {Adaptive surface related multiple elimination},
      journal = {Geophysics},
      year = {1992},
      volume = {57},
      number = {9},
      pages = {1166--1177}
    }
    
    Verschuur, D.J. Seismic multiple removal techniques: past, present and future 2006   book  
    Abstract: This book presents an overview of multiple removal methods that have been developed within the seismic exploration industry over the last five decades. It handles move-out based filtering, predictive deconvolution, wave equation-based prediction and subtraction and surface-related multiple removal, for both surface-related and internal multiples. The mathematical complexity is restricted to a minimum, while more emphasis is put at understanding the physical principles of these methods and their mutual relationships. The different multiple removal techniques are illustrated with a variety of synthetic and field data examples. For the surface-related multiple removal method, extra attention is paid to the practical aspects, extension to 3D data and land data applications. Finally, an outlook is given on how multiples can be turned from noise into usable data.
    BibTeX:
    @book{Verschuur_D_2006_book_sei_mrtppf,
      author = {Verschuur, D. J.},
      title = {Seismic multiple removal techniques: past, present and future},
      publisher = {EAGE Publications},
      year = {2006}
    }
    
    Verschuur, D.J. & Berkhout, A.J. Estimation of multiple scattering by iterative inversion, Part II: Practical aspects and examples 1997 Geophysics
    Vol. 62(5), pp. 1596-1611 
    article DOI URL 
    Abstract: A surface-related multiple-elimination method can be formulated as an iterative procedure: the output of one iteration step is used as input for the next iteration step (part I of this paper). In this paper (part II) it is shown that the procedure can be made very efficient if a good initial estimate of the multiple-free data set can be provided in the first iteration, and in many situations, the Radon-based multiple-elimination method may provide such an estimate. It is also shown that for each iteration, the inverse source wavelet can be accurately estimated by a linear (least-squares) inversion process. Optionally, source and detector variations and directivity effects can be included, although the examples are given without these options. The iterative multiple elimination process, together with the source wavelet estimation, are illustrated with numerical experiments as well as with field data examples. The results show that the surface-related multiple-elimination process is very effective in time gates where the moveout properties of primaries and multiples are very similar (generally deep data), as well as for situations with a complex multiple-generating system.
    BibTeX:
    @article{Verschuur_D_1997_j-geophysics_est_msiip2pae,
      author = {D. J. Verschuur and A. J. Berkhout},
      title = {Estimation of multiple scattering by iterative inversion, Part II: Practical aspects and examples},
      journal = {Geophysics},
      year = {1997},
      volume = {62},
      number = {5},
      pages = {1596--1611},
      url = {http://link.aip.org/link/?GPY/62/1596/1},
      doi = {http://dx.doi.org/10.1190/1.1444262}
    }
    
    Wang, Y. Multiple subtraction using an expanded multichannel matching filter 2003 Geophysics
    Vol. 68, pp. 346-354 
    article  
    Abstract: An expanded multichannel matching (EMCM) filter is proposed for the adaptive subtraction in seismic multiple attenuation. For a normal multichannel matching filter where an original seismic trace is matched by a group of multiple-model traces, the lateral coherency of adjacent traces is likely to be exploited to discriminate the overlapped multiple and primary reflections. In the proposed EMCM filter, a seismic trace is matched by not only a group of the ordinary multiple-model traces but also their adjoints generated mathematically. The adjoints of a multiple-model trace include its first derivative, its Hilbert transform, and the derivative of the Hilbert transform. The convolutional coefficients associated with the normal multichannel filter can be represented as a 2D operator in the time-space domain. This 2D operator is expanded with an additional spatial dimension in the EMCM filter to improve the robustness of the adaptive subtraction. The multiple-model traces are generated using moveout equations to afford efficiency in the multiple attenuation application.
    BibTeX:
    @article{Wang_Y_2003_j-geophysics_mu_suemmf,
      author = {Yanghua Wang},
      title = {Multiple subtraction using an expanded multichannel matching filter},
      journal = {Geophysics},
      year = {2003},
      volume = {68},
      pages = {346--354}
    }
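The "expanded basis" idea in this abstract, matching with the model trace plus its derivative, Hilbert transform, and derivative of the Hilbert transform, can be illustrated on a single trace with zero-lag coefficients. The real EMCM filter is convolutional and multichannel; this sketch keeps only the basis-expansion step, and all names and toy data are mine:

```python
import numpy as np

def hilbert_transform(x):
    """Imaginary part of the FFT-based analytic signal."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h).imag

def expanded_match_subtract(data, model, dt):
    """Fit data as a linear combination of the multiple model and its
    adjoint traces (derivative, Hilbert transform, derivative of the
    Hilbert transform), then subtract the fit."""
    hm = hilbert_transform(model)
    basis = np.column_stack([model,
                             np.gradient(model, dt),
                             hm,
                             np.gradient(hm, dt)])
    coef, *_ = np.linalg.lstsq(basis, data, rcond=None)
    return data - basis @ coef

# a multiple that is a scaled, phase-rotated copy of the model lies in
# the span of the expanded basis and is removed exactly
dt = 0.004
t = np.arange(500) * dt
model = np.sin(2 * np.pi * 20.0 * t) * np.exp(-((t - 1.0) / 0.2) ** 2)
data = 0.8 * model + 0.3 * hilbert_transform(model)
resid = expanded_match_subtract(data, model, dt)
```

The Hilbert-transform column is what lets a short filter absorb phase rotations that a purely convolutional match of the same length cannot.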
    
    Weglein, A.B., Gasparotto, F.A., Carvalho, P.M. & Stolt, R.H. An inverse-scattering series method for attenuating multiples in seismic reflection data 1997 Geophysics
    Vol. 62(6), pp. 1975-1989 
    article DOI URL 
    Abstract: We present a multidimensional multiple-attenuation method that does not require any subsurface information for either surface or internal multiples. To derive these algorithms, we start with a scattering theory description of seismic data. We then introduce and develop several new theoretical concepts concerning the fundamental nature of and the relationship between forward and inverse scattering. These include (1) the idea that the inversion process can be viewed as a series of steps, each with a specific task; (2) the realization that the inverse-scattering series provides an opportunity for separating out subseries with specific and useful tasks; (3) the recognition that these task-specific subseries can have different (and more favorable) data requirements, convergence, and stability conditions than does the original complete inverse series; and, most importantly, (4) the development of the first method for physically interpreting the contribution that individual terms (and pieces of terms) in the inverse series make toward these tasks in the inversion process, which realizes the selection of task-specific subseries. To date, two task-specific subseries have been identified: a series for eliminating free-surface multiples and a series for attenuating internal multiples. These series result in distinct algorithms for free-surface and internal multiples, and neither requires a model of the subsurface reflectors that generate the multiples. The method attenuates multiples while preserving primaries at all offsets; hence, these methods are equally well suited for subsequent poststack structural mapping or prestack amplitude analysis. The method has demonstrated its usefulness and added value for free-surface multiples when (1) the overburden has significant lateral variation, (2) reflectors are curved or dipping, (3) events are interfering, (4) multiples are difficult to identify, and (5) the geology is complex. The internal-multiple algorithm has been tested with good results on band-limited synthetic data; field data tests are planned. This procedure provides an approach for attenuating a significant class of heretofore inaccessible and troublesome multiples. There has been a recent rejuvenation of interest in multiple attenuation technology resulting from current exploration challenges, e.g., in deep water with a variable water bottom or in subsalt plays. These cases are representative of circumstances where 1-D assumptions are often violated and reliable detailed subsurface information is not available typically. The inverse scattering multiple attenuation methods are specifically designed to address these challenging problems. To date it is the only multidimensional multiple attenuation method that does not require 1-D assumptions, moveout differences, or ocean-bottom or other subsurface velocity or structural information for either free-surface or internal multiples. These algorithms require knowledge of the source signature and near-source traces. We describe several current approaches, e.g., energy minimization and trace extrapolation, for satisfying these prerequisites in a stable and reliable manner.
    BibTeX:
    @article{Weglein_A_1997_j-geophysics_inv_ssmamsrd,
      author = {Arthur B. Weglein and Fernanda Araujo Gasparotto and Paulo M. Carvalho and Robert H. Stolt},
      title = {An inverse-scattering series method for attenuating multiples in seismic reflection data},
      journal = {Geophysics},
      publisher = {SEG},
      year = {1997},
      volume = {62},
      number = {6},
      pages = {1975--1989},
      url = {http://link.aip.org/link/?GPY/62/1975/1},
      doi = {http://dx.doi.org/10.1190/1.1444298}
    }
    
    Weisser, T., Pica, A.L., Herrmann, P. & Taylor, R. Wave equation multiple modelling: acquisition independent 3D SRME 2006 First Break
    Vol. 24, pp. 75-79 
    article  
    Abstract: Marine seismic data acquired over structured or rugose seafloors contain complex multiple wavefields, and in deep water this multiple energy may contaminate our target zone, either directly by overlaying it or indirectly as the migration process smears the energy up across shallower events. Recent advances in de-multiple processing technology have seen the industry move to full 3D surface related multiple elimination (3D SRME) to better deal with these complex multiple wavefields. But this has come at some cost as 3D SRME techniques generally perform better with high-density acquisition and require heavy interpolation pre-processing. We present a test of CGG's alternative approach to 3D SRME to determine whether high density acquisition is a necessity for effective 3D SRME, or an unnecessary expense.
    BibTeX:
    @article{Weisser_T_2006_j-fb_wav_emmai3dsrme,
      author = {Weisser, T. and A. L. Pica and P. Herrmann and R. Taylor},
      title = {Wave equation multiple modelling: acquisition independent 3D SRME},
      journal = {First Break},
      year = {2006},
      volume = {24},
      pages = {75--79}
    }
    
    Wu, M. & Wang, S. A case study of $f-k$ demultiple on 2D offshore seismic data 2011 The Leading Edge
    Vol. 30(4), pp. 446-450 
    article DOI URL 
    Abstract: The identification and attenuation of multiples has been and continues to be one of the most complex seismic noise problems facing exploration geophysicists. Effective separation of multiples from primary reflections is a key step in multiple attenuation, and this paper examines a robust $f-k$ multiple suppression technique that can successfully be applied to offshore 2D data sets that violate the assumptions of alternative multiple suppression technologies.
    BibTeX:
    @article{Wu_M_2011_j-tle_cas_sfkd2dosd,
      author = {Mei Wu and Shungen Wang},
      title = {A case study of $f-k$ demultiple on 2D offshore seismic data},
      journal = {The Leading Edge},
      publisher = {SEG},
      year = {2011},
      volume = {30},
      number = {4},
      pages = {446--450},
      url = {http://link.aip.org/link/?LEE/30/446/1},
      doi = {http://dx.doi.org/10.1190/1.3575293}
    }
    
    Yilmaz, Ö. Seismic data analysis: processing, inversion, and interpretation of seismic data 2001   book  
    BibTeX:
    @book{Yilmaz_O_2001_book_sei_dapiisd,
      author = {Yilmaz, Ö.},
      title = {Seismic data analysis: processing, inversion, and interpretation of seismic data},
      publisher = {Soc. Expl. Geophysicists},
      year = {2001}
    }
    
    Encyclopedia of Complexity and Systems Science 2009
    Vol. 1-10 
    book  
    Abstract: Encyclopedia of Complexity and Systems Science provides an authoritative single source for understanding and applying the concepts of complexity theory together with the tools and measures for analyzing complex systems in all fields of science and engineering. The science and tools of complexity and systems science include theories of self-organization, complex systems, synergetics, dynamical systems, turbulence, catastrophes, instabilities, nonlinearity, stochastic processes, chaos, neural networks, cellular automata, adaptive systems, and genetic algorithms. Examples of near-term problems and major unknowns that can be approached through complexity and systems science include: The structure, history and future of the universe; the biological basis of consciousness; the integration of genomics, proteomics and bioinformatics as systems biology; human longevity limits; the limits of computing; sustainability of life on earth; predictability, dynamics and extent of earthquakes, hurricanes, tsunamis, and other natural disasters; the dynamics of turbulent flows; lasers or fluids in physics, microprocessor design; macromolecular assembly in chemistry and biophysics; brain functions in cognitive neuroscience; climate change; ecosystem management; traffic management; and business cycles. All these seemingly quite different kinds of structure formation have a number of important features and underlying structures in common. These deep structural similarities can be exploited to transfer analytical methods and understanding from one field to another. This unique work will extend the influence of complexity and system science to a much wider audience than has been possible to date.
    BibTeX:
    @book{Meyers_R_2009_book_enc_css,
      title = {Encyclopedia of Complexity and Systems Science},
      publisher = {Springer},
      year = {2009},
      volume = {1-10}
    }
    
