
Now, Later, or Never: Multicenter Randomized Controlled Trial Call—Is Surgery Necessary after Atypical Breast Core Biopsy Results in Mammographic Screening Settings?

The University of British Columbia, Vancouver, Canada V62 1Y6

Received 14 October 2014; Accepted 21 March 2015

Academic Editor: C. H. Yip

Copyright © 2015 Nikita Makretsov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Breast cancer mammographic screening leads to the detection of premalignant and preinvasive lesions with increasing frequency. Current epidemiologic evidence indicates that screening reduces breast cancer specific mortality, but not overall mortality, in breast cancer patients. Evidence is lacking on whether aggressive eradication of DCIS (the preinvasive form of breast carcinoma) by surgery and radiation confers a survival benefit, given that long-term breast cancer specific mortality in cohorts of patients with DCIS is already in the single-digit percent range. Furthermore, it is not known whether aggressive surgical eradication of atypical breast lesions that fall short of a diagnosis of DCIS benefits patients at all. Here we propose a model for a randomized controlled trial to generate high-level evidence and resolve this dilemma.

1. Introduction and Background

Mammographic breast cancer screening is under scrutiny because there is no firm evidence that it reduces mortality in the female population, while there is growing evidence that it leads to overtreatment due to false positive results [1, 2]. False positives include patients with positive mammograms who underwent breast core biopsy and were found to have no cancer on pathology. In addition, 3–5% of all breast core biopsies performed in the mammographic screening setting remain indeterminate and are labeled by pathologists as “atypical” breast lesions [3]. These results lead to diagnostic surgical excision of breast tissue with placement of a needle wire in the area of concern in the breast. These procedures are complex, expensive, and invasive and invariably cause stress and anxiety in patients. Their efficiency is measured in new cancers detected and is estimated at 2–15% of all patients who undergo such procedures. Of these cancers, 85–90% are preinvasive ductal carcinoma in situ, associated with excellent outcomes. The remaining 5–10% of cases are true invasive carcinomas, of which more than 80% have excellent prognostic features (3–5% risk of 5-year mortality) [4]. Thus, even the most conservative estimates show significant overdiagnosis and overtreatment as a result of this approach [1, 2]. A randomized controlled trial has never been proposed to explore the equivalence of conservative management of atypical breast lesions. It is time to explore conservative, watchful management of atypical preinvasive breast lesions, similar to the approach to preclinical prostate cancers already adopted by the medical community [5]. The aim of this short essay is to propose an overall design for such a noninferiority trial.


2. Research Question, PICO (Participants, Intervention, Comparison, and Outcome)

Is watchful conservative management equivalent to surgical management of atypical breast lesions in the mammographic screening setting?

2.1. Participants
2.1.1. Inclusion Criteria, Patients

Inclusion criteria are as follows:
(i) females in the UK, US, and Canada, aged 50–70 (the regular mammographic screening age group),
(ii) any ethnicity,
(iii) mammographic screening participant with a positive mammography result, that is, a nonpalpable abnormality or indeterminate calcifications on the screening mammogram, requiring biopsy,
(iv) pathology biopsy result indicating at least one of the following (in the absence of invasive carcinoma): atypical ductal hyperplasia, atypical lobular hyperplasia, flat epithelial atypia, breast papilloma with atypia, breast fibroadenoma with atypia, and ductal carcinoma in situ of low grade, and their synonyms, as outlined by Pinder [6],
(v) no prior diagnosis of breast cancer (either ipsi- or contralateral),
(vi) no family history of breast cancer.

2.1.2. Inclusion Criteria, Clinical Centers

The procedures should be performed at participating centers that have a specialized breast unit and a multidisciplinary breast clinical management team in place (including radiology, pathology, and surgery), follow best practice guidelines, and participate in appropriate quality assurance schemes [6–8].

2.1.3. Exclusion Criteria, Patients

Exclusion criteria are as follows:
(i) clinically diagnosed or self-detected palpable breast masses,
(ii) any masses diagnosed by screening mammography,
(iii) core biopsy diagnosis of invasive carcinoma of any type, high grade ductal carcinoma in situ, or encapsulated papillary carcinoma,
(iv) patients with a family history of breast cancer or genetic conditions associated with a higher risk of breast cancer,
(v) patients with previously diagnosed and treated breast cancers,
(vi) patients with other malignancies, except noninvasive skin cancers,
(vii) patients who are unfit for surgery.

2.1.4. Exclusion Criteria, Clinical Centers

Clinical centers that do not have an organized breast unit or multidisciplinary breast service in place, or that perform breast surgery only occasionally, are excluded.

2.2. Intervention

The main idea behind this trial design is to evaluate conservative management of atypical breast lesions detected by core biopsy, using annual mammographic follow-up. Conservative management therefore constitutes the “intervention” in this trial. This should not be confused with standard care (i.e., surgical management), which is used for comparison.

2.3. Comparison

The comparison is the current standard of care: radiologically localized, needle wire guided surgical excision of the abnormal area of the breast after a core biopsy diagnosis of atypia.

2.4. Outcomes

Primary Outcome. Primary outcome is mortality (overall and breast cancer specific).

Secondary Outcomes. Secondary outcomes are quality of life score; follow-up mastectomy rates; economic measures such as cost per QALY [9]; and rates of, and time to, surgical excisional biopsy in the intervention group.

3. Trial Design

Design Type. The design is a two-arm, noninferiority randomized controlled trial.

Allocation Concealment. Recruiters will be shielded from the allocation of patients to study arms.

Randomization. Randomization will be central and internet based, with stratification by age.
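As an illustration only (not part of the trial protocol), the following Python sketch shows one way central, age-stratified randomization could be implemented; the two age strata, the block size, and the arm labels are assumptions made for the example.

```python
# Minimal sketch of age-stratified block randomization (illustrative only).
import random

ARMS = ["conservative", "surgical"]
BLOCK_SIZE = 4
_blocks = {"50-59": [], "60-70": []}   # one shuffled block per assumed age stratum

def allocate(age: int) -> str:
    """Return the arm for the next participant in the appropriate stratum."""
    stratum = "50-59" if age < 60 else "60-70"
    if not _blocks[stratum]:                      # refill with a balanced block
        block = ARMS * (BLOCK_SIZE // len(ARMS))
        random.shuffle(block)
        _blocks[stratum] = block
    return _blocks[stratum].pop()

print(allocate(54), allocate(67))
```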

Blinding. Blinding poses particular challenges in this trial. Patient blinding is not possible, as masking surgery as a procedure is problematic: leaving a surgical scar without tissue removal would be unethical and disfiguring for a body part as sensitive as the breast. Moreover, because the lesions are nonpalpable, a needle wire must be placed in the breast before surgery to guide removal. Surgeon blinding is likewise impossible, for obvious reasons. The radiologists will be blinded.

3.1. Sample Size

Entry Assumption. The 10-year breast cancer specific mortality in both groups is estimated to be at least 2% (similar to that of DCIS of the breast).

If there is truly no difference between the standard and experimental management, then 1680 patients are required to be 80% sure that the limits of a two-sided 90% confidence interval will exclude a difference between the standard and experimental groups of more than 2%. An online sample size calculator for equivalence trials with binary outcome measures was used (http://www.sealedenvelope.com/power/binary-equivalence/).
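For transparency, the quoted figure can be reproduced with the standard normal-approximation formula for an equivalence trial with a binary outcome. The sketch below assumes a 2% event rate in both arms, a 2% margin, 80% power, and a two-sided 90% confidence interval, as stated above; it is not the calculator's own implementation.

```python
# Minimal sketch reproducing the sample size quoted above (assumptions as stated in the text).
import math
from scipy.stats import norm

p_std = p_exp = 0.02   # assumed 10-year breast cancer specific mortality in both arms
margin = 0.02          # equivalence margin
alpha = 0.05           # one-sided alpha, i.e. a two-sided 90% confidence interval
power = 0.80

z_alpha = norm.ppf(1 - alpha)
z_beta = norm.ppf(1 - (1 - power) / 2)   # beta split over the two one-sided tests

variance = p_std * (1 - p_std) + p_exp * (1 - p_exp)
n_per_arm = math.ceil((z_alpha + z_beta) ** 2 * variance / margin ** 2)

print(n_per_arm, 2 * n_per_arm)   # 840 per arm, 1680 in total
```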

3.2. Feasibility

According to NHS data on breast screening [4], 1.7 million women were screened by mammography in England alone in 2010. Of these, an estimated 7–8% had positive mammograms and required core biopsy (119,000–136,000 women). Of these, an estimated 3–5% (3,570–6,800 women) have eligible atypical lesions on core biopsy pathology. Assuming that 50% of women will not consent to the study and prefer surgical excision, and that an additional 10% will not meet the eligibility criteria, the target study population could potentially be recruited into a multicenter trial within 1–3 years.
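The arithmetic behind this estimate can be laid out explicitly; the sketch below simply chains the percentages quoted above and is illustrative rather than a formal recruitment model.

```python
# Worked version of the recruitment estimate above (England 2010 figures quoted in the text).
screened = 1_700_000
biopsied = (0.07 * screened, 0.08 * screened)            # 119,000-136,000 women
atypical = (0.03 * biopsied[0], 0.05 * biopsied[1])      # 3,570-6,800 women
recruitable = tuple(n * 0.5 * 0.9 for n in atypical)     # 50% decline, 10% ineligible

print(f"potentially recruitable per year: {recruitable[0]:,.0f}-{recruitable[1]:,.0f}")
# ~1,600-3,100 women per year in England alone, so the target of 1,680
# is plausibly reachable within 1-3 years of multicenter recruitment.
```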

3.3. Follow-Up

At least 10 years of follow-up are needed to reach a conclusion, and this might appear problematic. An interim analysis can be performed at the 5-year follow-up mark, if approved by the trial monitoring committee.

4. Trial Management

Multicenter trials have significant organizational challenges and require thorough planning and adequate funding. A planning committee should be established with representatives from all participating centers. Feasibility should be determined after thorough consideration of all potential risks and benefits. A coordinating center should be designated, responsible for delivery of the randomization scheme, trial management, data collection from all centers, and data analysis. The organizational structure of the entire trial should be established, with all areas of responsibility and authority clearly declared. Steering and monitoring committees will need to be organized in this context and the standards of quality defined [10, 11].

5. Monitoring of Process and Other Specific Challenges

Patients in the intervention and comparison arms will be followed up at the same mammographic intervals (once a year).

If a patient in the intervention arm has a positive mammogram or a finding on clinical or self-examination during the follow-up period, needle wire localization surgical biopsy or lumpectomy will be performed using the same methodology as in the comparison group. If a diagnosis of invasive cancer or DCIS is established after such surgical excision, the woman should undergo treatment per current standard practice, as in the comparison group.

It is important that the radiologists and pathologists who read the participants’ follow-up investigations remain blinded to prior mammograms and the initial core biopsy result for the entire trial period.

6. Analysis

The data analysis should be performed on an intention-to-treat basis.

Primary Outcome. The primary outcomes are breast cancer specific mortality (Table 1) and overall mortality (not shown for brevity, as the approach is similar). The 2 × 2 tables will be constructed and populated, and 95% confidence intervals will be calculated as described earlier [12, 13]. In addition, Kaplan-Meier analysis with a log-rank test could be performed if the event rate allows a statistically valid comparison.
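As a schematic example of the primary analysis (with invented counts, not trial data), the risk difference and its Wald 95% confidence interval could be computed from the 2 × 2 table as follows.

```python
# Sketch: risk-difference analysis of the breast cancer specific mortality endpoint
# from a 2x2 table. The counts below are placeholders, not trial data.
import math

deaths_int, n_int = 17, 840     # hypothetical intervention arm (conservative management)
deaths_std, n_std = 16, 840     # hypothetical comparison arm (surgical management)

p_int, p_std = deaths_int / n_int, deaths_std / n_std
diff = p_int - p_std
se = math.sqrt(p_int * (1 - p_int) / n_int + p_std * (1 - p_std) / n_std)
ci = (diff - 1.96 * se, diff + 1.96 * se)   # Wald 95% CI for the risk difference

# Noninferiority would be supported if the upper CI limit stays below the 2% margin.
print(f"risk difference = {diff:.4f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
```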

Published online 2013 Aug 4. doi: 10.1007/s13244-013-0269-1
PMID: 23912879

Abstract

Objectives

To review the available guidance for quality assurance (QA) in mammography and to discuss its contribution to harmonising practices worldwide.

Methods

A literature search was performed across different sources to identify guidance documents for QA in mammography available worldwide from international bodies, healthcare providers and professional/scientific associations. The guidance documents identified were reviewed, and a selection was compared for type of guidance (clinical/technical), technology and proposed QA methodologies, focusing on dose and image quality (IQ) performance assessment.

Results

Fourteen protocols (targeted at conventional and digital mammography) were reviewed. All included recommendations for testing acquisition, processing and display systems associated with mammographic equipment. All guidance reviewed highlighted the importance of dose assessment and testing the Automatic Exposure Control (AEC) system. Recommended tests for assessment of IQ showed variations in the proposed methodologies. Recommended testing focused on assessment of low-contrast detection, spatial resolution and noise. QC of image display is recommended following the American Association of Physicists in Medicine guidelines.

Conclusions

The existing QA guidance for mammography is derived from key documents (American College of Radiology and European Union guidelines) and proposes similar tests despite variations in detail and methodology. Studies reporting QA data should provide detail on the experimental technique to allow robust data comparison. Countries aiming to implement a mammography QA program may select/prioritise the tests depending on available technology and resources.

Main messages

An effective QA program should be practical to implement in a clinical setting.

QA should address the various stages of the imaging chain: acquisition, processing and display.

AEC system QC testing is simple to implement and provides information on equipment performance.

Keywords: Mammography, Quality control, Quality assurance, Dose, Image quality

Introduction

To ensure the key goals of mammography are achieved, quality standards should be adopted. Ideally, these should be wide in scope and address the various aspects with impact on the mammography imaging process (e.g. technical, clinical and training).

A systematic approach for assessing critical performance indicators can be achieved through the implementation of a quality assurance (QA) program. QA provides a framework for constant improvement through a feedback mechanism. It allows the identification of deviations from optimum performance of mammographic equipment, suboptimal clinical practice and training needs [1–].

An effective QA program should be practical to implement in a clinical setting. Adequate test equipment is necessary, as well as a standard methodology that makes it possible to obtain the relevant objective and subjective metrics of quality. An effective QA program should also be implementable at a low or moderate cost [4].

The testing of equipment should address the various critical stages of the imaging chain (acquisition, processing and display) and be implemented in a multidisciplinary team approach by trained staff (radiographer, medical physicist, radiologist) [, ].

In the past 20 years, several guidance documents have been developed nationally and internationally to promote quality in mammography. The scope of the guidance documents varies: some focus on technical aspects [4, 6–10], whereas others also include clinical aspects (e.g. epidemiology, interventional, pathology, surgery) [7, 11]. The developments in digital mammography over the last 10 years have led to developments in QA programmes and promoted the recommendation of new tests and procedures for quality control [].

This study aimed to identify, analyse and compare selected protocols currently available for QA in mammography, and to discuss their contribution to harmonise practices in mammography worldwide.

This review aims to provide useful guidance to countries aiming to implement (or further develop) a QA program in mammography.

Methods

An extensive search was performed to identify guidance documents and protocols for QA in mammography. Sources used included scientific databases, organisations of national healthcare systems (hospitals, regulatory bodies, etc.), international agencies (e.g. International Atomic Energy Agency [IAEA], International Commission on Radiological Protection [ICRP]), professional colleges (e.g. American College of Radiology [ACR], Royal College of Radiologists [RCR]) and scientific associations (e.g. Institution of Physics and Engineering in Medicine [IPEM], American Association of Physicists in Medicine [AAPM]). The search returned various documents published in English, French, Portuguese, Spanish, German, Italian, Swedish and Dutch. Only documents published in English or French were considered, for comparability reasons, as the team did not master the other languages.

The guidance documents identified were reviewed and compared for structure, editorial details, target staff profiles, technologies addressed and type of guidance (technical and clinical). Comparative tables are presented summarising the most relevant findings.

Results

Guidance documents for QA and quality control (QC) in mammography

Fourteen guidance documents for QA and QC in mammography published between 1991 and 2011 were identified (Table 1). Two are recommended by European bodies (the European Reference Organisation for Quality Assured Breast Screening and Diagnostic Services [EUREF] and the European Commission [EC]), three are proposed internationally by the IAEA, and nine have national or regional scope (United States of America [USA], Canada, Australia, United Kingdom [UK], Ireland, Nordic countries), issued by governmental bodies and professional and/or scientific organisations.

Table 1

Guidance documents for quality assurance and quality control in mammography

Edition | Publisher | Country | Title | Short title (a) | Status | Scale | Reference (b)
2011 | IAEA | Various | Quality assurance programme for digital mammography | IAEA-DM | In use | Worldwide | [4]
2009 | IAEA | Various | Quality assurance programme for screen film mammography | IAEA-SF | In use | Worldwide | [10]
2007 | IAEA | Various | Dosimetry in diagnostic radiology: an international code of practice | IAEA-D | In use | Worldwide | [13]
2006 | European Commission/EUREF | Various | European guidelines for quality assurance in breast cancer screening and diagnosis, 4th edition | EC | In use/update in progress | Europe | [14]
1996 | European Commission | Various | European protocol on dosimetry in mammography | EP | In use | Worldwide | [6]
1991/1994 | Swedish Radiation Protection Institute | Denmark, Finland, Iceland, Norway, Sweden | Report on Nordic radiation protection co-operation, Number 1: mammography | Nordic | Superseded by the European Protocol | Nordic countries | [17]
2009 | NHSBSP | UK | Commissioning and routine testing of full field digital mammography systems | NHSBSP/UK | In use | National | [18]
2009 | RANZCR | Australia and New Zealand | Mammography quality assurance program: guidelines for quality control testing for digital (CR & DR) mammography | RANZCR | In use | National | []
2008 | The National Cancer Screening Service Board | Ireland | Guidelines for quality assurance in mammography screening | Irish Protocol | In use | National | [11]
2008 | NQMCBSA | Australia | BreastScreen Australia quality improvement program | Australian Protocol | In use | National | [20]
2005 | IPEM | UK | The commissioning and routine testing of mammographic X-ray systems | IPEM/UK | In use/update in progress | National | [8]
1999 | ACR | USA | Mammography quality control manual for radiologists, medical physicists and technologists | ACR | In use/update in progress | National | [9]
2006 | Ministère de la Santé-Québec | Canada (Québec) | Manuel de contrôle de la qualité pour la mammographie et la biopsie guidée par stéréotaxie. Volume 2: physicien biomédical | Canadian Protocol | In use | Regional | [21]
2001 | Ministère de la Santé-Québec | Canada (Québec) | Manuel de contrôle de la qualité. Volume 1: technologue en radiologie | Canadian Protocol | In use | Regional | [22]

(a) Short title used as reference in this manuscript

(b) Listed in the reference list

Guidance documents for QA and QC in mammography—scope and professional groups targeted

Four documents address both conventional and digital mammography. All documents are primarily focused on providing technical guidance. Three documents include both technical and clinical guidance.

Thirteen documents are targeted at medical physicists and nine also include guidance for radiographers and radiologists. One protocol is specifically targeted at radiographers (Table 2).

Table 2

Guidance documents for QA and QC in mammography: target staff profiles, technologies and guidance type (clinical and/or technical)

Short title | Radiographers/radiologists | Medical physicists | Other healthcare profiles (epidemiologists, nurses, oncologists, surgeons) | Screen film | Digital | Technical | Clinical | Tolerances or recommended levels
IAEA-DM | Y | Y | NA | NA | Y | Y | N | Y
IAEA-SF | Y | Y | NA | Y | NA | Y | Y | Y
IAEA-D | N | Y | N | Y | Y | Y | N | Y
EC | Y | Y | Y | Y | Y | Y | Y | Y
EP | NA | Y | N | Y | Y | Y | N | Y
Nordic Protocol | NA | Y | NA | Y | N | Y | NA | Y
NHSBSP/UK | NA | Y | NA | NA | Y | Y | N | Y
RANZCR | NA | Y | NA | NA | Y | Y | N | Y
Irish Protocol | Y | Y | Y | Y | Y | Y | Y | Y
NQMCBSA | Y | Y | Y | Y | Y | Y | N | Y
ACR | Y | Y | N | Y | N | Y | Y | Y
IPEM/UK | Y | Y | N | Y | Small field | Y | N | Y
Canadian Protocol | N | Y | N | Y | Y | Y | N | Y
Canadian Protocol | N | Y | N | Y | Y | Y | N | Y

Y yes, N no, NA not applicable/not available

The EC, Australian and Irish protocols are broader in scope and include guidance for epidemiologists, nurses, oncologists and surgeons.

Performance testing of mammographic systems and breast dose assessment

Most documents (exceptions are the European Protocol [EP] and IAEA-D protocols) recommend performance testing of the three main stages of the mammography imaging chain (Tables 3, 4, 5, 6 and 7):

  1. Image acquisition (the stage with more intensive testing)

  2. Image processing (following the manufacturers’ recommendations)

  3. Image display (includes monitor and printer testing)

Table 3

Recommended tests for QC for image detection and acquisition in mammographic systems (include testing the x-ray generation and image receptor)

Test type | Target parameter to assess | EC (2006) | IAEA-SF (2009) | IAEA-DM (2011) | ACR (1999) | UK/IPEM (2005) | Ireland (2008) | Australia (2008) | RANZCR (2009) | UK/NHSBSP (2009) | Canada (Quebec) (2001/2006) | Total (10) (a)
X-ray source | Focal spot size | Y | N | N | Y | Y | Y | N | N | N | Y | 5
X-ray source | Source-to-image distance | Y | N | N | N | Y | Y | N | N | Y | Y | 5
X-ray source | Alignment of X-ray field/image receptor | Y | Y | Y | Y | Y | Y | Y | N | Y | Y | 9
X-ray source | Radiation leakage | Y | Y | N | N | Y | Y | Y | N | Y | N | 7
X-ray source | Tube output | Y | N | Y | Y | Y | Y | Y | N | Y | Y | 8
Tube voltage and beam quality | Reproducibility and accuracy | Y | Y | Y | Y | Y | Y | Y | N | Y | Y | 9
Tube voltage and beam quality | Half value layer (HVL) | Y | Y | Y | Y | Y | Y | Y | N | Y | Y | 9
AEC system performance | Optical density control setting: central value and difference per step | Y | Y | N | Y | Y | N | Y | N | N | Y | 7
AEC system performance | Back-up timer and security cut-off | Y | N | N | N | Y | Y | Y | N | Y | N | 5
AEC system performance | Short term reproducibility | Y | N | N | Y | Y | Y | Y | Y | Y | R | 8
AEC system performance | Long term reproducibility | Y | N | Y | N | N | N | Y | N | N | R | 4
AEC system performance | Object thickness and tube voltage compensation | Y | Y | Y | Y | Y | Y | Y | Y | Y | Y | 10
AEC system performance | Correspondence between AEC sensors | Y | N | N | N | N | N | Y | Y | Y | Y | 5
Compression | Compression force | Y | N | Y | Y | Y | N | Y | Y | N | Y | 7
Compression | Compression plate alignment | Y | Y | Y | Y | N | Y | Y | Y | N | Y | 8
Anti-scatter grid (bucky and image receptor) | Grid system factor | Y | N | N | N | Y | Y | N | N | N | Y | 4
Anti-scatter grid (bucky and image receptor) | Grid imaging | Y | N | N | N | Y | Y | N | N | N | Y | 4
Screen-film | Inter-cassette sensitivity and attenuation variation and optical density range | Y | Y | N | Y | Y | N | Y | N | N | Y | 6
Screen-film | Screen-film contact | Y | Y | N | Y | Y | N | Y | N | N | Y | 6
Image receptor response (digital) | Response function | Y | N | Y | N | N | Y | Y | N | Y | R | 6
Image receptor response (digital) | Noise | Y | N | Y | N | R | Y | Y | N | Y | Y | 7
Image receptor response (digital) | Missed tissue at chest wall edge (digital) | Y | N | Y | N | N | N | Y | N | Y | Y | 5
Image receptor homogeneity and stability (digital) | Image receptor homogeneity | Y | N | Y | N | Y | Y | Y | Y | Y | Y | 8
Image receptor homogeneity and stability (digital) | Detector element failure (DR systems) | Y | N | N | N | N | Y | Y | Y | N | N | 4
Image receptor homogeneity and stability (digital) | Uncorrected defective detector elements (DR systems) | Y | N | N | N | N | Y | N | N | N | N | 2
Image receptor homogeneity and stability (digital) | Inter-plate sensitivity variation (CR) | Y | N | Y | N | N | N | Y | Y | N | Y | 5
Image receptor homogeneity and stability (digital) | Other sources of radiation (CR) | Y | N | N | N | N | N | N | N | N | Y | 2
Image receptor homogeneity and stability (digital) | Fading of latent image (CR) | Y | N | N | N | N | N | N | N | Y | Y | 4
System properties | Dosimetry | Y | Y | Y | Y | Y | Y | Y | N | Y | Y | 9
Image quality | Spatial resolution | Y | Y | Y | Y | Y | Y | Y | N | Y | Y | 9
Image quality | Image contrast | Y | Y | Y | Y | R | Y | Y | Y | Y | Y | 10
Image quality | Threshold contrast visibility | Y | Y | N | Y | R | Y | Y | Y | Y | Y | 9
Image quality | Exposure time | Y | N | N | N | Y | Y | Y | N | N | R | 5
Image quality | Modulation transfer function and noise power spectrum (optional) | Y | N | Y | N | Y | Y | Y | N | Y | Y | 7
Image quality | Artefacts | Y | N | N | Y | Y | N | Y | Y | Y | Y | 7
Image quality | Geometric distortion and artefacts evaluation | Y | N | Y | Y | N | Y | Y | Y | Y | Y | 8
Image quality | Ghost image/erasure thoroughness | Y | N | Y | N | N | Y | Y | N | Y | Y | 6

Y yes/exists, N no/not provided, R referred to without detail on the methodology

(a) Total refers to the total number of guidance documents that recommend the test

Table 4

Recommended test for QC: image processing stage in mammographic systems

Test type | Target parameter to assess | EC (2006) | IAEA-SF (2009) | IAEA-DM (2011) | ACR (1999) | UK/IPEM (2005) | Ireland (2008) | Australia (2008) | RANZCR (2009) | UK/NHSBSP (2009) | Canada (Quebec) (2001/2006) | Total (10) (a)
Film processing: processor performance | Temperature verification and baseline | Y | Y | N | Y | Y | N | Y | N | N | Y | 6
Film processing: processor performance | Processing time | Y | Y | N | N | Y | N | Y | N | N | Y | 5
Film and processor | Sensitometry | Y | Y | N | Y | Y | N | Y | N | N | Y | 6
Film and processor | Daily performance | Y | Y | N | Y | Y | N | Y | N | N | Y | 6
Darkroom | Artefacts | Y | Y | N | Y | Y | N | Y | N | N | Y | 6
Darkroom | Light leakage | Y | Y | N | N | Y | N | N | N | N | Y | 4

Y yes/exists, N no/not provided

(a) Total refers to the total number of guidance documents that recommend the test

Table 5

Recommended test for QC: image display stage in mammographic systems

Test type | Target parameter/characteristic to assess | EC (2006) | IAEA-SF (2009) | IAEA-DM (2011) | ACR (1999) | UK/IPEM (2005) | Ireland (2008) | Australia (2008) | RANZCR (2009) | UK/NHSBSP (2009) | Canada (Quebec) (2001/2006) | Total (10) (a)
Viewing conditions: viewing box | Luminance | Y | Y | Y | Y | Y | N | Y | Y | N | Y | 8
Viewing conditions: viewing box | Homogeneity | Y | Y | Y | Y | Y | N | Y | Y | N | Y | 8
Viewing conditions: viewing box | Ambient light | Y | Y | Y | Y | Y | Y | Y | N | N | Y | 8
Monitors | Ambient light (CRT displays) | Y | N | Y | N | R | Y | Y | N | Y | Y | 7
Monitors | Geometrical distortion (CRT displays) | Y | N | Y | N | R | Y | Y | Y | Y | Y | 8
Monitors | Contrast visibility | Y | N | Y | N | R | Y | Y | Y | Y | Y | 8
Monitors | Resolution | Y | N | Y | N | R | Y | Y | Y | Y | Y | 8
Monitors | Display artefacts | Y | N | Y | N | R | Y | Y | N | Y | Y | 7
Monitors | Luminance range | Y | N | Y | N | R | Y | Y | Y | Y | Y | 8
Printers | Greyscale display function | Y | N | Y | N | R | Y | Y | N | Y | Y | 7
Printers | Luminance uniformity | Y | N | Y | N | R | Y | Y | Y | Y | Y | 8
Printers | Geometrical distortion | Y | Y | Y | N | R | N | Y | Y | Y | Y | 8
Printers | Contrast visibility | Y | Y | Y | N | R | N | Y | Y | Y | Y | 8
Printers | Resolution | Y | Y | N | N | R | N | Y | Y | Y | Y | 7
Printers | Printer artefacts | Y | Y | Y | N | R | N | Y | Y | Y | Y | 8
Printers | Optical density range (optional) | Y | Y | Y | N | R | N | Y | N | Y | Y | 7
Printers | Greyscale display function | Y | Y | Y | N | R | N | Y | Y | Y | Y | 8
Printers | Density uniformity | Y | Y | Y | N | R | N | Y | N | Y | Y | 7
Other | Electrical tests | N | N | Y | Y | Y | Y | N | N | N | R | 5
Other | Mechanical tests | N | Y | Y | Y | Y | Y | Y | Y | N | N | 7
Other | Repeat image analysis | N | Y | Y | Y | Y | Y | Y | Y | N | Y | 8

Y yes/exists, N no/not provided, R referred to without detail on the methodology

(a) Total refers to the total number of guidance documents that recommend the test

Table 6

Recommended tests for dosimetry in mammography: technical aspects, dose estimation conversion factors and reference dose per projection

Protocol ID | Technical dosimetry (with test objects/phantom) | Clinical dosimetry (with patient data) | Test equipment (dosimeters) | Quantities and units | AGD estimation | Conversion factors | Reference dose per projection
Nordic | Standard breast model: 45 mm PMMA, equivalent to an average breast (50 % adipose + 50 % glandular) | NA | IC | ESAK (mGy) and AGD or MGD (mGy) | AGD = ESD × conversion factors | Rosenstein (1985) | ≤0.8 mGy without grid; ≤2 mGy with grid (OD = 1)
EP | Standard breast model: 45 ± 5 mm PMMA | 10 patients with a compressed breast thickness between 40 and 60 mm, for dose measurements on patients with TLD | TLD or other dosimeter with a dynamic range of 0.5–100 mGy | ESD (mGy), ESAK (mGy) and AGD or MGD (mGy) | AGD = ESAK × gPB | Dance (1990) | 2.3 mGy for a standard phantom
ACR | (1) Blocks of PMMA (20, 40, 60 and 80 mm); (2) standard breast model: 40 mm PMMA, equivalent to 42 mm of a 50/50 mixture | NA | IC | Entrance exposure estimated from recorded technical factors and tube output (mR/mAs), and MGD (mGy) | Estimated for various thicknesses | Dance (1990); Wu (1991) and Sobol (1997) | ≤3 mGy (42 mm compressed breast thickness)
IPEM | (1) Blocks of PMMA (20–80 mm); (2) standard breast model: 45 mm PMMA, equivalent breast thickness 55 mm with 30 % glandularity | AGD for a series of breast examinations on each mammography system, periodically. Data collection: breast thickness, kVp, mAs. Accuracy: ±2 mm | IC and electrometer | ESAK (K), AGD or MGD (mGy) | D = K g c s | Dance (2000) for two age ranges, 40–49 and 50–64 | 2 mGy (40 mm compressed breast thickness)
EC | (1) Blocks of PMMA (20–80 mm); (2) standard breast model: 45 mm PMMA, equivalent breast thickness 53 mm | AGD for a series of breast examinations on each mammography system. Data collection: breast thickness, kVp, mAs. Accuracy: ±2 mm | | ESAK (K); AGD or MGD (mGy) | D = K g c s | Dance (2000) for two age ranges, 40–49 and 50–64 | 2.5 mGy for 45 mm
PQDCS | Standard breast model: 40 mm PMMA, equivalent to 42 mm of a 50/50 mixture | NA | IC and electrometer | Entrance surface dose (ESD) (mR); AGD or MGD (mrad/R) | AGD = ESD × conversion factors | Stanton (1984); Wu (1991, 1994) | ≤3 mGy for a breast thickness of 42 mm
IAEA-D | 45 mm thick PMMA phantom, equivalent to a 'standard' breast of thickness 50 mm and glandularity 50 % | A range of 10–50 patients; the compressed breast should be between 40 and 60 mm thick, with a mean value of 50 ± 5 mm | IC, semiconductor dosimeter or TLD | Incident air kerma (mGy); entrance surface air kerma (mGy); AGD or MGD (mGy) | DG = cDG50,Ki,PMMA s Ki (standard breast); DG = cDG50,Ki cDg,DG50 s Ki (patient studies) | Dance (2000) |
BC NBSP | Blocks of PMMA (20–70 mm) | Based on IPSM89 (2005) and the European Protocol on Dosimetry in Mammography (1996) | IPSM89 (2005) and the European Protocol on Dosimetry in Mammography (1996) | ESAK (K); AGD or MGD (mGy) | D = K g c s | Dance (2000) for two age ranges, 40–49 and 50–64 | 2.5 mGy for 45 mm
NQMCBSA | Standard phantom: 42 mm, 50 % adipose and 50 % glandular breast (i.e. the ACR accreditation phantom) | | | | | | ≤2.0 mGy for exposures made using typical clinical settings
NHSBSP | (1) Blocks of PMMA (20–70 mm); (2) standard breast model: 45 mm PMMA, equivalent breast thickness 53 mm | 50 patients recommended, with a compressed breast thickness of 55 ± 5 mm; 10 patients should be included in the dose sample | | ESAK (K); AGD or MGD (mGy) | D = K g c s | Dance (2000) | 1 mGy for 20 mm PMMA; 2.5 mGy for 45 mm PMMA; 6.5 mGy for 70 mm PMMA
IAEA-SF | Standard breast model: 45 mm PMMA, equivalent breast thickness 53 mm | NA | IC | ESAK (K); AGD or MGD (mGy) | D = K g c s | Dance (2000) | Achievable: 2.0 mGy; acceptable: 2.5 mGy
IAEA-DM | Blocks of PMMA (20, 45 and 70 mm) | NA | Calibrated detector at appropriate mammographic energies | Incident air kerma; AGD or MGD (mGy) | D = K g c s | Dance (2000) | 1 mGy for 20 mm PMMA; 2.5 mGy for 45 mm PMMA; 6.5 mGy for 70 mm PMMA

NA not applicable, not available/not accessible

g Mo/Mo is the conversion factor of incident air KERMA (K) to MGD for Mo/0.030 mm Mo at 28 kVp, c is a factor that corrects for glandularity different from 50 % and s corrects for any anode/filter material combination, other than the Mo/Mo at 28 kVp only. cDG50,Ki,PMMA is the conversion coefficient to calculate the MGD for a 50 mm standard 50 % glandular breast from the air kerma for a 45 mm PMMA phantom. The coefficient cDGg,DG50 converts MGD for a 50 % glandular breast to the MGD for a breast of glandularity, g, and of the same thickness

Table 7

Overview of recommended tests for image quality assessment in mammography (for further detail on the methodology please refer to the original guidance documents)

Test | Guideline | Materials | Comments and reference values
Positioning | ACR, EC, Ireland, Australia, IAEA-SF | NA | Subjective evaluation using clinical criteria
Compression | ACR | NA | Subjective evaluation using clinical criteria
Compression | EC | Compression force device, foam rubber, tape measure | Display force = measured force ± 20 N; max motorised force = 130–200 N
Compression | Ireland | Scales, compressible material | Subjective evaluation using clinical criteria; max motorised force = 200 N; display force = measured force ± 20 N; max misalignment <5 mm for symmetric load
Compression | Australia | Force measuring device (e.g. analogue bathroom scales) | Maximum motorised force between 150 and 200 N
Compression | IAEA-DM | Scales (e.g. analogue bathroom scales), foam, PMMA slabs | Test motorised and manual compression; max motorised force = 150–200 N; max manual force = 300 N; display thickness = slab thickness ± 8 mm
Contrast resolution and visualisation of breast lesions (in phantom) | ACR, Canada, Australia, RANZCR | ACR accreditation phantom; Canada allows alternative phantoms (RMI 156, NA 18–220 or CIRS 015) | Canada provides reference values for SFM and DR; ACR, Australia and RANZCR provide a minimum threshold for visible details
Contrast resolution and visualisation of breast lesions (in phantom) | EC | CDMAM |
Contrast resolution and visualisation of breast lesions (in phantom) | UK/IPEM, UK/NHSBSP | TOR (MAM) or CDMAM | IPEM recommends remedial values for low-contrast detail detectability; NHSBSP recommends acceptable and achievable values
Contrast resolution and visualisation of breast lesions (in phantom) | Ireland | CDMAM, PMMA blocks, CDCOM software, TORMAM | Recommends achievable and minimum limits for automated analysis, following EC guidance
Contrast resolution and visualisation of breast lesions (in phantom) | IAEA-DM | Not specified (the phantom should mimic breast structures) | IQ assessed for digital systems should be as good as, or better than, that expected with high-quality SF mammography
Spatial resolution | UK/IPEM | TOR(MAX) | Recommends a remedial value
Spatial resolution | EC | MTF test tool, software to calculate the MTF | Recommends considering the acceptance value as reference
Spatial resolution | Ireland | MTF edge phantom, ImageJ software | Recommends considering the acceptance value as reference
Spatial resolution | Canada | 2 test patterns; PMMA | Minimum threshold for broad
Spatial resolution | ACR | Bar pattern | Recommended values for perpendicular and parallel MTF
Spatial resolution | IAEA-DM | MTF test tool (metal foil with straight edges, e.g. copper, stainless steel or brass); PMMA to support the MTF test tool; MTF software | Acceptable values are presented for all available manufacturers
Spatial resolution | UK/NHSBSP | Bar pattern | Recommended values presented according to manufacturer (should be at least <70 % of Nyquist frequency of the detector)
Spatial resolution | UK/IPEM | TORMAX | Remedial and suspension values provided
Noise | EC | NPS phantom (standard test block), optional software to calculate the NPS | Manufacturer's specifications
Noise | Ireland | Standard PMMA test block, dosimeter | Consider acceptance values as reference
Noise | Canada | PMMA blocks of various thicknesses and 0.1 mm of aluminium | Acceptable values are presented for all types of breast tissue
Noise | ACR | NA | Subjective evaluation using clinical criteria
Signal-to-noise ratio (SNR) | UK/NHSBSP | PMMA blocks | Recommended minimum SNR variation
Signal-to-noise ratio (SNR) | IAEA-DM | PMMA blocks and aluminium (or PMMA contrast object) | Acceptable values are presented for all available manufacturers
Artefacts | All protocols | PMMA blocks or aluminium plate | Subjective evaluation using established criteria
Labelling | ACR and IAEA-SF | NA | Patient and facility name, projection view, side, cassette number, mammography unit used and the initials of the technologist who performed the examination

NA not available/not applicable

Testing the image acquisition system

X-ray production system

All documents recommend testing the generator and X-ray source, the Automatic Exposure Control (AEC) and the breast compression systems. Recommended tests include (1) alignment of X-ray field/light field/image receptor area, (2) repeatability and accuracy of tube output exposure, (3) half-value layer (HVL), (4) AEC response versus breast thickness and tube voltage compensation and (5) alignment of the compression plate.
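As a simple illustration of how AEC response versus thickness might be checked in practice, the sketch below compares the signal-to-noise ratio measured at each PMMA thickness with the value at a reference thickness; the 15 % tolerance and the measurements are placeholders rather than values taken from any specific protocol.

```python
# Illustrative AEC thickness-compensation check: detector SNR per PMMA thickness
# should stay within an assumed tolerance of the value at a reference thickness.
measured_snr = {20: 61.0, 40: 55.5, 45: 54.0, 60: 50.2, 70: 47.8}  # mm PMMA -> SNR (example data)
reference_thickness_mm = 45
tolerance = 0.15  # placeholder tolerance, not from any particular guidance document

ref = measured_snr[reference_thickness_mm]
for thickness, snr in sorted(measured_snr.items()):
    deviation = (snr - ref) / ref
    status = "OK" if abs(deviation) <= tolerance else "INVESTIGATE"
    print(f"{thickness} mm PMMA: SNR {snr:.1f} ({deviation:+.1%}) {status}")
```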

Breast dose

Table 6 reviews the guidance for dosimetry testing. All guidance documents provide recommendations for assessment of breast dose and two (i.e. EP; IAEA-D) are dedicated to this topic and include detailed methodology.

The mean glandular dose (MGD) is the recommended parameter for assessing the risk of radiation-induced cancer in mammography. Proposed methodologies for MGD assessment (reviewed in Table 6) include:

  • Measurements on patients using a thermoluminescent dosimeter (TLD) (a minimum of ten patients is recommended)

  • Dose estimation from clinical exposure data (10–60 patients recommended)

  • Dose estimation using test objects/phantoms (the entrance surface air kerma without backscatter (ESAK) should be measured and multiplied by a conversion factor, which compensates for X-ray beam quality, breast thickness and composition (percentage glandularity))

The ESAK is required to calculate MGD and can be measured with a calibrated ionisation chamber (IC), semiconductor dosimeter or TLD material (Table 6). If measurements include the effect of backscatter (e.g. TLDs), an appropriate correction factor should be applied [6]. The recommended phantoms to perform dosimetry testing vary between the protocols (Table 6).

Also, the various protocols propose different methodologies to measure the required data for MGD calculation (e.g. the ACR, Canadian and UK/IPEM propose measurements to be performed at 40 mm from the chest wall edge, whereas the EP recommends 60 mm).

Since the conversion factors used to estimate the MGD from the incident air kerma depend on the X-ray beam quality, it is necessary to keep track of the target/filter (T/F) combination and tube voltage used in the experimental procedure, as well as the half-value layer (HVL) of the X-ray beam.
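To make the calculation concrete, the sketch below applies the MGD relation D = K g c s used by several of the protocols in Table 6; the numerical factors are placeholders for illustration only and in practice must be read from the tabulated data of Dance et al. for the measured HVL, compressed breast thickness, glandularity and target/filter combination.

```python
# Sketch of the MGD calculation D = K * g * c * s. The factor values below are
# illustrative placeholders, not the published Dance coefficients.
def mean_glandular_dose(esak_mGy: float, g: float, c: float, s: float) -> float:
    """MGD (mGy) from the entrance surface air kerma without backscatter."""
    return esak_mGy * g * c * s

K = 7.5    # example measured ESAK at the breast surface, mGy
g = 0.20   # placeholder: ESAK-to-MGD factor for a 50 % glandular breast
c = 1.0    # placeholder: correction for glandularity other than 50 %
s = 1.0    # placeholder: correction for a non-Mo/Mo target/filter combination

print(f"MGD ≈ {mean_glandular_dose(K, g, c, s):.2f} mGy")
```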

The EC protocol proposes conversion factors from Dance et al. (1990) and Dance et al. (2000), whereas the ACR uses factors from Dance et al. (1990), Wu et al. (1991) and Sobol et al. (1997) [9], the Canadian protocol uses Stanton et al. (1984) and Wu et al. (1991) [13], and the Nordic protocol proposes conversion factors from Rosenstein et al. (1985) [14].

Image receptor

The most frequently recommended tests for digital mammography include (1) the system’s response function, (2) image noise, (3) missed tissue at chest wall edge, (4) signal homogeneity and (5) image artefacts (Table 3).

Some protocols propose specific tests for CR systems, namely (1) inter-plate sensitivity variations, (2) image artefacts, (3) evaluation of the influence of secondary sources of radiation and (4) fading of the latent image signal. Guidance is also included for testing the scanning mechanism of the CR plate and the efficiency of the erasure cycle. Specific tests for SFM are proposed in the older protocols (EC protocol, UK/IPEM, Canada, IAEA-SF) (Table 3).

Quality of the acquired image

Table 7 summarises eight groups of tests for assessment of IQ recommended in the guidance documents reviewed. The tests address technical and clinical IQ criteria using test objects and phantoms.

Phantoms and test objects

The recommended phantoms used to produce the images for low-contrast IQ assessment vary between the protocols. CDMAM is frequently recommended in Europe (EC protocol, UK/IPEM, UK/NHSBSP and Ireland), whereas the ACR phantom is the standard in use in the US and Canada.

IAEA does not recommend a particular phantom but highlights the importance of using a phantom that contains structures able to mimic those typically found in the breast.

For high-contrast IQ assessment the MTF is the key recommended parameter. The MTF bar pattern method is more straightforward to implement than the calculation of the MTF using the edge phantom.
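For illustration, a bare-bones implementation of the edge method is sketched below; it assumes a small region of interest containing a vertical edge and omits the edge-angle estimation and sub-pixel binning steps that a protocol-compliant measurement would require.

```python
# Sketch: MTF estimation from an edge image (edge method), simplified.
import numpy as np

def mtf_from_edge(roi: np.ndarray, pixel_pitch_mm: float):
    """Return spatial frequencies (cycles/mm) and normalised MTF values."""
    esf = roi.mean(axis=0)                 # edge spread function (average across rows)
    lsf = np.gradient(esf)                 # line spread function (derivative of ESF)
    lsf = lsf * np.hanning(lsf.size)       # window to reduce truncation artefacts
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                          # normalise to 1 at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf

# Example with a synthetic blurred edge (0.1 mm pixel pitch assumed):
x = np.linspace(-2, 2, 200)
edge = 1 / (1 + np.exp(-x / 0.05))                         # smooth step
roi = np.tile(edge, (50, 1)) + 0.01 * np.random.randn(50, 200)
freqs, mtf = mtf_from_edge(roi, pixel_pitch_mm=0.1)
print(f"MTF at 5 cycles/mm ≈ {np.interp(5.0, freqs, mtf):.2f}")
```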

Image processing

Image quality is affected by the processing stage. For SF systems the guidance reviewed recommends testing the performance of the chemical processor (e.g. time, temperature, base and fog levels). The EC guidelines highlight the importance of testing image processing. For digital mammography systems, the manufacturer’s guidance should be followed because image-processing algorithms are manufacturer-specific.

Artefacts

Artefact analysis is an important test recommended in all guidelines reviewed. For SFM it focuses on artefacts resulting from the chemical processing or from degradation of the screen-film detector characteristics. In digital systems, artefact analysis focuses on problems originating in the image acquisition system and during plate handling and processing (CR systems). Testing also covers artefacts originating from printing devices (e.g. laser printers). A clinical evaluation protocol (type testing) is available on the EUREF website (www.euref.org), and repeat/reject analysis is recommended in the IAEA-DM protocol.

Image display

QA guidelines for testing image display systems (Table 5) refer to the AAPM report Task Group 18 [15] for testing electronic monitors and printers. The testing of light boxes is included in the QC guidance for SFM systems [11, , ].

Test frequency and reference (or limiting) values

All guidance documents provide recommendations on the frequency of the tests (Table 1). A number of tests are recommended at acceptance only. Others should be performed periodically (yearly, 6-monthly, monthly, weekly or daily). Intermediate testing should be performed when necessary (e.g. following major equipment repair).

The guidance documents also provide reference values and pass/fail criteria. These originate from manufacturer recommendations, expert knowledge, survey QC data, baseline values and national policies (e.g. existing dose reference levels). A critical aspect is to ascertain whether the measured value (including its uncertainty) is substantially lower than the reference/limiting value. As an example, the UK/IPEM guidance recommends that measured values for the relevant performance indicators not exceed one-third of the range proposed for the limiting or remedial values.
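A toy check along these lines (with invented numbers and a loose reading of the one-third criterion) might look as follows.

```python
# Toy illustration of comparing a measured indicator, plus its uncertainty,
# against a remedial (limiting) value; numbers and the exact rule are invented.
def within_tolerance(measured, uncertainty, baseline, remedial, fraction=1/3):
    """True if the deviation from baseline, plus uncertainty, stays within
    `fraction` of the baseline-to-remedial range (a loose reading of the UK/IPEM advice)."""
    margin = abs(remedial - baseline)
    return abs(measured - baseline) + uncertainty <= fraction * margin

print(within_tolerance(measured=2.05, uncertainty=0.05, baseline=2.0, remedial=2.5))
```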

Discussion

The study showed that in the last 20 years comprehensive guidance documents have been developed worldwide to support the implementation of QA in mammography.

Target technology

The IAEA-DM protocol (edited 2011) is the most up-to-date guidance and is dedicated to digital mammography. The UK/IPEM, EC, IAEA-SF and ACR protocols are well-established documents originally developed for SFM that have been adopted in many countries worldwide. The EC guidelines were updated and an addendum on digital mammography was included [1, 17]. At the date of submission of this paper, an updated version of the ACR protocol is known to be in progress to include guidance specifically targeted at digital mammography. Also, as per information available on the EUREF website, a revised edition (5th) of the EC Guidelines is in development [18].

As new techniques in digital mammography are becoming widespread, it is expected that revised versions of the existing protocols will be produced, including guidance for testing the capabilities of state of the art technology (e.g. tomosynthesis, dual-energy contrast-enhanced digital subtraction mammography).

Professional targets

The EC and Irish protocols are wider in scope and may be useful to a broader range of healthcare professionals. Other protocols focus on dosimetry and IQ assessment and are targeted at medical physicists, radiographers and breast radiologists. Hendrick et al. [] showed that the profile of staff performing QA testing differs between countries. Often, radiographers are in charge of the most frequent tests (daily, weekly), whereas medical physicists perform in-depth technical performance assessment (e.g. collimation, X-ray tube output, and AEC testing). In Japan, radiographers perform all QC testing, whereas in Finland, Iceland and Hungary the service engineers tend to be in charge of the QC tasks. As highlighted in the IAEA-DM protocol a critical aspect is that QC testing is delegated to staff holding appropriate expertise and training [4].

QA testing of mammographic systems and breast dose assessment

Image detection and acquisition system

All protocols reviewed recommend testing the X-ray source (tube voltage and HVL) and the AEC system. AEC testing is one of the most important procedures because of its direct impact on IQ and breast dose []. It should consider the effects of variations in object/attenuator thickness and radiation beam quality. Hendrick et al. [] compared QC practices in 22 countries (affiliated with the International Breast Cancer Screening Network) and found that this test was performed in all countries.

Breast dose

The recommended methodologies for breast dose estimation vary (Table 6). Measurements using test objects and breast phantoms are frequently recommended and more practical to implement than measurements based on TLD techniques.

Dose assessment with a standard test object/phantom facilitates the comparison of different mammographic techniques and the investigation of the impact of technical settings on breast dose [20, 21]. Clinical dose assessment (using clinical exposure data) provides valuable information on clinical practice and takes into account the influence of breast thickness and composition on dose [6].

Variations in dosimetry techniques in mammography may prevent a robust comparison of breast dose in mammography between countries and between radiology departments [22–].

Dance et al. [] also highlighted that national protocols adopt different phantoms, optical densities, measurement points and conversion factors, which makes it difficult to compare the doses estimated with different protocols.

Hemdal et al. [] measured the impact of variations in experimental technique (e.g. positioning of the dosimeter, compression plate in or out of the beam) on MGD values and found noticeable variations.

When the European protocol was used, the value of the MGD increased by 5 ± 2 % (total variation 0–9 %) at clinical settings and by 9 ± 3 % (4–17 %) compared with the use of the Nordic protocol [21]. The same authors also compared measurements with different dosimeters (ionisation chambers vs solid-state detectors) []. They concluded that HVL measurements can be performed accurately with a sensitive solid-state detector and a collimated radiation field, correcting for energy dependence.

This review showed variations in the conversion factors used in the estimation of breast dose (to account for X-ray spectrum characteristics and breast composition) amongst the guidance documents.

Zoetelief and Jansen [25] compared protocols for dosimetry in mammography and concluded that the use of different radiation transport codes and different spectra could cause differences in the conversion factor g by up to about 7 %. They also showed that inclusion of the compression plate in the beam results in a 4.5 ± 1.5 % smaller g value for the same HVL. Also, when breast thickness increases from 2 cm to 8 cm, the g value decreases by a factor of 4.

Tsai et al. [15] showed that the MGD calculated using Dance’s method is 9–21 % higher than that calculated using Wu’s method. Jamal et al. [] also compared the MGD per film across eight different studies using different protocols and conversion factors and found noticeable variations in MGD values for the same breast thickness.

The MGD critically depends on the X-ray spectrum generated by the T/F combination and tube voltage used. Modern digital mammography systems offer innovative T/F combinations (e.g. W/Ag, W/Al), and new conversion factors have been developed [, , ]. The protocols reviewed do not yet include the most recently published data.

Quality of the acquired image

All guidance reviewed recommends performing low-contrast threshold detection testing, breast lesion visualisation (e.g. simulated in phantoms) and artefact analysis. Compression force, image noise and spatial resolution testing are also recommended with variations in the proposed methods and test materials.

The EC protocol recommends assessment of the image quality of digital mammographic systems using images produced with a specific low-contrast-detail test object (CDMAM) [, ], which is a costly tool not readily available in all imaging departments. The UK/IPEM and ACR protocols recommend alternative test objects to CDMAM, namely TOR (MAM) and the ACR accreditation phantom, respectively. The choice of a suitable IQ phantom should take into consideration the technology to be tested (screen-film or digital). Huda et al. [] examined the effectiveness of the ACR phantom for assessing image quality in digital mammography and concluded that it is unsatisfactory owing to an inappropriate range and sensitivity for characterising simulated breast lesions.

Variations in recommended test objects lead to differences in reference/tolerance values (Table 7). The number and type of recommended IQ tests varied (between 1 and 9), as did the recommended methodologies. Examples of methods found in the guidance for rating IQ include absolute or relative scales, e.g. a five-step scale from 1 (worst) to 5 (best); a two-step scale with 1 (criterion fulfilled) and 0 (criterion not fulfilled); and the four-step PGMI scale (perfect, good, moderate and inadequate).

The guidance documents reviewed do not include recommendations on observer training for IQ assessment. This could be useful to reduce inter-observer variability in the assessment of IQ.

Also, breast compression force is influenced by breast thickness and composition. However, no recommendations are provided to promote the optimisation of compression force according to individual characteristics of the breast (compressibility, composition and thickness) [31, 32]. Maximum values for compression in mammography are recommended [7, 11, 33, 34].

The composition of breast tissue is an important issue because increased breast density is a known risk factor for developing breast cancer []. Nevertheless, in the QA guidance reviewed, breast density was not used as a standard for IQ assessment.

In 2011, an addendum to the EC protocol, containing guidance for clinical evaluation of mammographic images, was published promoting harmonisation in image quality analysis. Clinical IQ assessment conducted by experienced radiologists is important because it takes into account the effects of image processing which may directly affect the visibility of relevant features and the subsequent diagnostic outcome [36].

Image display/presentation and processing

All protocols including guidance for digital mammography recommend testing monitor displays and printers (Table 2). No recommendations are provided regarding the format for delivering mammography examinations/images to the patient and practices vary—some healthcare institutions deliver the examination in hardcopy (paper or film), whereas others provide digital images on CD.

Despite the potentially critical impact of image processing on the quality of the final image, the testing of image-processing tools is still at an early stage (compared with the testing of hardware). Most protocols for testing digital mammography systems recommend testing based on raw image data and do not include recommendations for testing the post-processing algorithms used on clinical images. Establishing testing protocols for post-processing tools in digital mammography is a challenging task, as processing tends to be manufacturer-specific and manufacturers are frequently reluctant to reveal details of the post-processing algorithms incorporated in their systems. A recent publication [37] briefly addresses the challenges of testing post-processing in tomosynthesis. Work is in progress, in collaboration with the manufacturers of digital breast tomosynthesis imaging systems, to identify a method for technical evaluation incorporating the clinically used tomosynthesis reconstruction technique. Testing post-processing tools in digital mammography is a topic that requires further research.

Conclusion

In this study the published guidance for QA in mammography was reviewed. The recommended performance tests for image acquisition, processing and display systems were discussed and compared. Noticeable variations exist in the proposed methods, test objects and phantoms. Reference values and acceptability criteria also vary between protocols, which raises the question of whether a mammography system could comply with one test procedure and its acceptability criteria yet fail under another.


Harmonisation and best practices in mammography would benefit from more detailed guidance on the experimental methods for QC testing and recommendations of more affordable test equipment and materials that could be acquired by the majority of X-ray departments.

When a recommended protocol cannot be implemented in full, a selection of tests may be adequate. Selection criteria should take into consideration the resources and expertise available and the relevance of the tests to local practice. The value of testing the AEC system should be highlighted: it is a simple procedure to implement and provides valuable information on the overall performance of the mammography system.

A key factor to promote the success of a QA program for mammography is teamwork and the collaboration of all key staff (e.g. radiographers, radiologists, medical physicists and healthcare managers). Training and continuous feedback mechanisms are essential to improve the testing procedures and strengthen the outcomes of the program.

Also, the use of professional networks and special interest groups to exchange experiences with colleagues worldwide can be of great value in the initial phases of implementation of a mammography QA program.


Acknowledgments

The authors express their gratitude to Edward Hendrick, Christopher Lawinski, João Alves and Mário Oliveira for their valuable feedback on improving the scientific content and English writing in this manuscript.

Conflict of interest

The authors declare no conflicts of interest. No funding was received for this work.

Contributor Information

Cláudia Reis, Phone: +351-91-8690239, Email: claudia.reis@estesl.ipl.pt, Email: claudia.reis@fe.lisboa.ucp.pt.

Ana Pascoal, Email: apascoal74@gmail.com.

Taxiarchis Sakellaris, Email: msakellar@gmail.com.

Manthos Koutalonis, Email: m.koutalonis@gmail.com.

References

1. Joy JE, Penhoet EE, Petitti DB (2005) Saving women's lives: strategies for improving breast cancer detection and diagnosis. The National Academies
2. Klabunde C, Bouchard F, Taplin S, Scharpantgen A, Ballard-Barbash R. Quality assurance for screening mammography: an international comparison. J Epidemiol Community Health. 2001;55(3):204–212. doi: 10.1136/jech.55.3.204.
3. Li Y, Poulos A, McLean D, Rickard M. A review of methods of clinical image quality evaluation in mammography. Eur J Radiol. 2010;74:122–131. doi: 10.1016/j.ejrad.2009.04.069.
4. International Atomic Energy Agency. Quality assurance programme for digital mammography. Vienna: International Atomic Energy Agency; 2011.
5. Hendrick RE, Klabunde C, Grivegnee A, Pou G, Ballard-Barbash R. Technical quality control practices in mammography screening programs in 22 countries. Int J Qual Health Care. 2002;14(3):219–226. doi: 10.1093/oxfordjournals.intqhc.a002613.
6. European Commission. European protocol on dosimetry in mammography. Luxembourg: European Commission; 1996.
7. European Commission/European Reference Organisation for Quality Assured Breast Screening and Diagnostic Services. European guidelines for quality assurance in breast cancer screening and diagnosis. 4th edn. Brussels: European Communities; 2006.
8. National Breast Screening Quality Assurance (2005) The commissioning and routine testing of mammographic X-ray systems. Institute of Physics and Engineering in Medicine, York
9. American College of Radiology (1999) Mammography quality control manual. American College of Radiology, Reston
10. International Atomic Energy Agency. Quality assurance programme for screen film mammography. Vienna: International Atomic Energy Agency; 2009.
11. The National Cancer Screening Service (2008) Guidelines for quality assurance in mammography screening, 3rd edn. Members of the Quality Assurance Committee/The National Cancer Screening Service Board, Dublin
12. Marshall NW, Mackenzie AHI. Quality control measurements for digital x-ray detectors. Phys Med Biol. 2011;56:979–999. doi: 10.1088/0031-9155/56/4/007.
13. Francine N, Richard T. Manuel de contrôle de la qualité pour la mammographie et la biopsie guidée par stéréotaxie. Quebec: Ministère de la Santé et des Services Sociaux; 2006.
14. Nordisk Rapportserie om Stralskyddsfragor (1994) Mammography_Nordic_protocol_Selected_pages.pdf, Stockholm
15. Tsai HY, Chong NS, Ho YJ, Tyan YS. Evaluation of depth dose and glandular dose for digital mammography. Radiat Meas. 2010;45(3–6):726–728. doi: 10.1016/j.radmeas.2010.02.005.
16. Dance DR, Young KC, van Engen RE. Estimation of mean glandular dose for breast tomosynthesis: factors for use with the UK, European and IAEA breast dosimetry protocols. Phys Med Biol. 2011;56(2):453–471. doi: 10.1088/0031-9155/56/2/011.
17. International Cancer Screening Network (2011) Other characteristics of breast cancer screening programs in 27 ICSN countries, 2007–2008. National Cancer Institute, Bethesda
18. European Communities/European Reference Organisation for Quality Assured Breast Screening and Diagnostic Services, European Commission (2006) European guidelines for quality assurance in breast cancer screening and diagnosis. http://www.euref.org/european-guidelines 19(4):1–432
19. Meeson S, Young KC, Hollaway PB, Wallis MG. Procedure for quantitatively assessing automatic exposure control in mammography: a study of the GE Senographe 600 TS. Br J Radiol. 2001;74(883):615–620. doi: 10.1259/bjr.74.883.740615.
20. Zoetelief J, Fitzgerald M, Leitz W, Sabel M. Dosimetric methods for and influence of exposure parameters on the establishment of reference doses in mammography. Radiat Prot Dosim. 1998;80:175–180. doi: 10.1093/oxfordjournals.rpd.a032499.
21. Hemdal B, Bengtsson G, Leitz W, Andersson I, Mattson S. Comparison of the European and Nordic protocols on dosimetry in mammography involving a standard phantom. Radiat Prot Dosim. 2000;90:149–154. doi: 10.1093/oxfordjournals.rpd.a033106.
22. Hemdal B (2009) Evaluation of absorbed dose and image quality in mammography. Lund University, Malmö
23. Hemdal B, Herrnsdorf L, Andersson I, Bengtsson G, Heddson B, Olsson M. Average glandular dose in routine mammography screening using a Sectra MicroDose mammography unit. Radiat Prot Dosim. 2005;114(1–3):436–443. doi: 10.1093/rpd/nch556.
24. Jamal N, Ng KH, McLean D. A study of mean glandular dose during diagnostic mammography in Malaysia and some of the factors affecting it. Br J Radiol. 2003;76(904):238–245. doi: 10.1259/bjr/66428508.
25. Zoetelief J, Jansen JTM. Calculation of air kerma to average glandular tissue dose conversion factors for mammography. Radiat Prot Dosim. 1995;57(1–4):397–400.
26. Dance DR, Skinner CL, Young KC, Beckett JR, Kotre CJ. Additional factors for the estimation of mean glandular breast dose using the UK mammography dosimetry protocol. Phys Med Biol. 2000;45(11):3225–3240. doi: 10.1088/0031-9155/45/11/308.
27. Koutalonis M, Delis H, Spyrou G, Costaridou L, Tzanakos G, Panayiotakis G. Monte Carlo generated conversion factors for the estimation of average glandular dose in contact and magnification mammography. Phys Med Biol. 2006;51(21):5539–5548. doi: 10.1088/0031-9155/51/21/010.
28. Thierens H, Bosmans H, Buls N, De Hauwere A, Bacher K, Jacobs J, Clerinx P. Typetesting of physical characteristics of digital mammography systems for screening within the Flemish breast cancer screening programme. Eur J Radiol. 2009;70(3):539–548. doi: 10.1016/j.ejrad.2008.01.046.
29. Marshall NW. A comparison between objective and subjective image quality measurements for a full field digital mammography system. Phys Med Biol. 2006;51(10):2441–2463. doi: 10.1088/0031-9155/51/10/006.
30. Huda W, Sajewicz AM, Ogden KM, Scalzetti EM, Dance D. How good is the ACR accreditation phantom for assessing image quality in digital mammography? Acad Radiol. 2002;9:764–772. doi: 10.1016/S1076-6332(03)80345-8.
31. Poulos A, McLean D. The application of breast compression in mammography: a new perspective. Radiography. 2004;10(2):131–137. doi: 10.1016/j.radi.2004.02.012.
32. O'Leary D, Teape A, Hammond J, Rainford L, Grant T (2011) Compression force recommendations in mammography must be linked to image quality. In: Proceedings of the European Congress of Radiology 2011, Vienna, pp 1–19
33. National Quality Management Committee of BreastScreen Australia (2008) BreastScreen Australia national accreditation standards: quality improvement program. National Quality Management Committee of BreastScreen Australia
34. Royal Australian and New Zealand College of Radiologists Mammography Quality Assurance Program (2009) Guidelines for quality control testing for digital (CR & DR) mammography. Royal Australian and New Zealand College of Radiologists, Sydney
35. Vachon CM, Pankratz VS, Scott CG, Maloney SD, Ghosh K, Brandt KR, Milanese T, Carston MJ, Sellers TA. Longitudinal trends in mammographic percent density and breast cancer risk. Cancer Epidemiol Biomarkers Prev. 2007;16(5):921–928. doi: 10.1158/1055-9965.EPI-06-1047.
36. Bosmans H, van Engen R, Heid P, Lazzari B, Schopphoven S, Thijssen M, Young K. EUREF type test protocol. Nijmegen: European Reference Organisation for Quality Assured Breast Screening and Diagnostic Services; 2011.
37. van Engen R, Bosmans H, Bouwman R, Dance D, Heid P, Lazzari B, Marshall N, Schopphoven S, Strudley C et al (2013) Protocol for the quality control of the physical and technical aspects of digital breast tomosynthesis systems, draft version 0.10. EUREF, Nijmegen