Facebook behaves like a “digital gangster,” has deliberately broken privacy and competition law, and should be subject to statutory regulation urgently, according to a U.K. parliamentary report.

The final report of the Digital, Culture, Media and Sport select committee’s 18-month investigation into disinformation and fake news says that “the dominance of a handful of powerful tech companies has resulted in their behaving as if they were monopolies in their specific area,” with “Facebook, in particular, [being] unwilling to be accountable to regulators around the world.”

U.K. Culture Secretary Jeremy Wright reportedly flew to Facebook’s headquarters in California earlier this week and was scheduled to meet with CEO Mark Zuckerberg to discuss the findings on Thursday. Zuckerberg has previously refused to appear before Parliament to answer for Facebook.

In a statement, Facebook said that it rejects all claims that it breached data protection and competition laws.

The report also accused Facebook of purposefully obstructing its inquiry and failing to tackle attempts by Russia to manipulate elections.

Members of Parliament (MPs) singled out Zuckerberg for special criticism. The report said that Zuckerberg had “shown contempt” for failing to attend any of the committee’s hearings, as well as those of the “International Grand Committee,” a body made up of members from nine countries around the world convened to tackle data privacy and disinformation.

The committee criticised Facebook for failing to address any of the failings and concerns that the committee—as well as the U.K.’s data privacy watchdog, the Information Commissioner’s Office (ICO)—had previously flagged up.

In oral evidence, Elizabeth Denham, head of the ICO, said that Facebook does not view the rulings from other regulators such as the U.S. Federal Trade Commission, the federal privacy commissioner in Canada or the Irish data protection regulator “as anything more than advice.”

The report also slammed Zuckerberg’s lack of awareness about what Cambridge Analytica was up to with users’ information as “a profound failure of governance,” adding that “the incident displays the fundamental weakness of Facebook in managing its responsibilities to the people whose data is used for its own commercial interests.”

The report also said that Zuckerberg’s claim that “we’ve never sold anyone’s data” is “simply untrue.” Denham told the committee in November 2018 that “a tension exists between the social media companies’ business model, which is focused on advertising, and human rights, such as the protection of privacy.”

Consequently, the committee has made a number of recommendations to rein in Facebook and other social media companies’ attitudes toward data privacy, accountability, and transparency. These include making tech companies more directly liable for the content available on their sites, imposing a levy on them to support the enhanced work of the ICO, and establishing a code of ethics overseen and enforced by an independent regulator with the power to set large fines for non-compliance. The committee also recommends that the regulator should have access to algorithms to ensure they are operating responsibly, and that “inferred data” about individuals (such as political views) extracted from data models should be as protected under the law as personal information.

Facebook says that it is “open to meaningful regulation,” and that it has already taken steps to check for harmful content and to ensure that political advertising is authorised and made more transparent. The company says that it also supports effective privacy legislation that “holds companies to high standards in their use of data and transparency for users.”

“While we still have more to do, we are not the same company we were a year ago,” said Karim Palant, U.K. public policy manager at Facebook, in a statement. “We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.”