Written evidence response to the consultation regarding the SMS Investigation into Google Search:
Data Portability CR
a. Do you agree with the aim of the Data Portability CR and how we propose to implement the Data Portability CR to meet that aim?
The Consumer Choice Center agrees with the high-level aim of the proposed Data Portability Conduct Requirement (CR): to reduce switching costs, empower consumers, and support effective competition through enhanced consumer choice.
However, we have reservations as to whether the proposed implementation, as currently designed, satisfies the CMA’s statutory requirements of proportionality and necessity.
Drawing on themes raised consistently across our previous consultation responses, we emphasise that:
● Data portability should be a consumer-enabling mechanism, not a substitute for demonstrated consumer demand or a tool for indirect market reconfiguration.
● The CMA should clearly articulate the theory of harm the CR is intended to address, including evidence that existing portability rights or market-led solutions are insufficient.
● The CR should be designed to minimise interference with legitimate product design choices and ongoing investment incentives.
Without a clearer demonstration that the proposed obligations are the least intrusive means of achieving the stated objective, there is a risk the CR may exceed what is required to protect consumers.
b. Do you consider the proposed Data Portability CR would result in the potential benefits we have identified (for example, value and innovation)?
The Consumer Choice Center recognises that data portability can generate consumer benefits under certain conditions. However, we caution against assuming that mandated portability will necessarily lead to increased value or innovation.
Innovation in digital markets is primarily driven by product differentiation, quality, and user experience, rather than by mandated data access.
The CMA has not yet demonstrated that limited data portability is a binding constraint on consumer choice or entry.
There is a risk that overly prescriptive portability requirements could dampen dynamic competition by reducing incentives to invest in proprietary systems and integrated services.
We encourage the CMA to more clearly explain:
● How the proposed CR is expected to translate into tangible consumer benefits,
● How those benefits will be distinguished from broader market trends,
● How potential negative effects on innovation and investment will be mitigated,
● And how the CMA intends to monitor the effectiveness of the CRs after the 5-year period.
c. Do you agree with our proposal to use Interpretative Notes to clarify the conduct we expect from Google to comply with the Data Portability CR?
The Consumer Choice Center agrees in principle with the use of Interpretative Notes to clarify expectations around compliance, particularly given the technical complexity of data portability.
That said, from a legal-certainty and proportionality perspective, Interpretative Notes should remain clearly non-binding and should not expand the scope of the CR beyond its formal wording.
They should not function as de facto secondary legislation or impose obligations that have not been subject to full consultation and assessment.
Firms should retain flexibility in how outcomes are achieved, provided consumer-facing objectives are met.
This approach would be consistent with the CMA’s obligation to ensure that Conduct Requirements are targeted and adaptable.
d. Do you agree with the content of the Interpretative Notes? Are they sufficiently clear and comprehensive? Do they cover the right issues? Are there any gaps?
While the Interpretative Notes provide a useful starting point, the Consumer Choice Center does not consider them sufficiently complete or precise at this stage.
In particular, we identify gaps relating to:
● The interaction between data portability, user consent, and data security.
● Practical and technical feasibility constraints, including the costs and risks of continuous or near-real-time portability.
● The absence of clear criteria for assessing when portability is genuinely beneficial to consumers rather than merely available.
We recommend that the CMA:
● Place greater emphasis on consumer outcomes, not procedural compliance.
● Explicitly acknowledge trade-offs between portability, privacy, and cybersecurity.
● Commit to iterative revision of the Notes as evidence and market conditions evolve.
e. Do you agree with our proposals for compliance reporting and for monitoring the effectiveness of the proposed intervention? Have we identified the right metrics?
The Consumer Choice Center supports proportionate compliance reporting and ongoing monitoring but cautions against reporting frameworks that prioritise administrative metrics over real-world consumer impact. Consistent with our previous consultation responses, we recommend that:
● Effectiveness is assessed primarily through consumer uptake, awareness, and satisfaction, rather than the sheer availability or volume of data transfers.
● The CMA establishes clear baseline metrics against which progress will be measured.
● Reporting requirements remain scalable and responsive to evidence of harm.
We also encourage the CMA to monitor potential unintended consequences, including:
● Increased risks of fraud or data misuse,
● Higher compliance costs being passed on to consumers,
● Reduced incentives for service improvement.
Publisher CR
6.2 In relation to ensuring publishers have sufficient choice:
(a) As noted in paragraph 4.13, we would welcome further evidence on the benefits and risks of Google providing separate controls for training and grounding outside general search.
(b) As noted in paragraph 4.25, we would like to receive further evidence on the benefits and risks of Google providing page-level controls outside of general search (i.e. for Google-Extended).
a. The Consumer Choice Center supports efforts to improve transparency and fairness for publishers where this demonstrably enhances consumer welfare. However, any Publisher Conduct Requirement (CR) should remain proportionate, technologically neutral, and carefully targeted at clearly evidenced harms. Overly prescriptive obligations risk reducing innovation in search and generative AI features, ultimately harming users through lower-quality products and slower iteration.
Providing separate controls for training and grounding may enhance publisher autonomy and clarity. However, risks include:
● Increased operational complexity that may reduce product quality or slow model improvement.
● Fragmentation of model training datasets, potentially reducing factual robustness.
● Strategic blocking behaviour that benefits larger publishers at the expense of smaller ones.
If implemented, controls should be simple, interoperable, and consistent with existing technical standards. The CMA should ensure that any requirement is proportionate and does not undermine model quality to the detriment of consumers.
b. Page-level controls could improve precision and reduce over-blocking. However:
● They may impose compliance burdens on smaller publishers.
● Excessive granularity may create instability in indexing and AI feature performance.
The CMA should ensure such controls are optional, scalable, and supported by clear technical guidance.
6.3 In relation to greater transparency for publishers:
(a) As noted in paragraph 4.45, we would like to receive further evidence on the benefits and risks of providing performance and engagement information on a ‘per feature’ basis within general search.
(b) As noted in paragraph 4.50, we would welcome views on what would be the most effective way(s) for Google to provide publishers with information to enable them to understand the quality of clicks referred from search generative AI features.
a. Providing performance data on a per-feature basis could improve publisher understanding of traffic flows and business impact. However:
● Disclosure should not enable reverse engineering of ranking systems.
● Data granularity must be carefully calibrated to avoid gaming or manipulation.
● Compliance costs should be proportionate to demonstrated need.
The primary objective should remain improving publisher understanding without undermining search integrity.
b. The most effective approach would include:
● Aggregate engagement metrics (bounce rate, dwell time, conversion proxies).
● Clear differentiation between AI-surfaced links and traditional search referrals.
● Periodic explanatory reporting rather than real-time competitive data.
Any solution should focus on enabling publishers to assess value, while protecting user privacy and ranking system resilience.
6.4 In relation to attribution:
(a) As noted in paragraph 4.67, we welcome views on whether a mechanism for publishers to more easily communicate the reasons for blocking content from appearing in Google search generative AI features would enhance the effectiveness of our proposed publisher CR whilst ensuring it remains proportionate.
(b) We welcome further examples of information and metrics that help explain how Search Content is attributed in Google’s search generative AI features and the factuality of those features, and views on how these data would be best disseminated.
(c) We welcome further views on the extent to which our proposed publisher CR can be expected to result in the identified consumer benefits, including ensuring that users are able to assess and trust content they read on the web.
a. A standardised communication channel could improve regulatory oversight and transparency. However, it should not create incentives for strategic or coordinated blocking behaviour. Any mechanism must be lightweight and avoid creating de facto approval systems.
b. Useful metrics may include:
● Frequency of citation by domain category.
● Prominence indicators within generative outputs.
● High-level factuality evaluation methodologies.
Dissemination should occur via periodic transparency reports rather than continuous dashboards to avoid gaming risks.
c. The proposed Publisher CR may enhance trust if it improves clarity of attribution and source visibility. However, benefits depend on careful calibration. Excessive intervention risks degrading AI quality, which would undermine consumer trust rather than enhance it.