But victims often don't know where to turn, or don't get the support they need; police and prosecutors struggle to keep up with new forms of crime; and the law can feel out of date too. Some organisations also struggle to find their way between protecting free speech and preventing criminal abuse.
New draft guidelines from the Crown Prosecution Service list four kinds of online abuse that should be treated as crimes under existing law:
The CPS proposes that the first three "should be prosecuted robustly". But in the fourth category it considers prosecution much less likely to be "in the public interest".
What do you think? Is this the right approach to prosecution? What more needs to be done? Have you got personal experience? Join our debate here. Or follow the link to the CPS consultation on its guidelines here.
Are current laws out of date? Most of them were drawn up long before social media.
Campaigners have put forward different ideas including:
But what's your view? Tell us here.
Even where the law is clear, many people have raised concerns about the police's ability to pursue online crime. In different cases, the police have been criticised both for failing to take threats seriously and for pursuing cases that shouldn't have been prosecuted.
There are questions about the level of training, expertise, victim support and police resources, and about whether this kind of crime should be pursued by specialist regional and national units or whether every local officer should be able to deal with it.
The Chief Constable for Durham leads for the police nationally on both violence against women and cybercrime. He's drawing up new programmes for tackling online crime and wants to work with Reclaim the Internet on what needs to be done.
So what's your view? What action do you want by the police and who do you want them to work with to get this right?
Organisations such as employers, trades unions, political parties and membership organisations all have a potential leadership role to play in tackling online abuse.
Some employers do excellent work supporting victims of domestic violence. Trades unions have deep expertise in defending people against harassment or discrimination in the workplace. And many membership organisations, including political parties, have standards of behaviour for members to prevent discrimination, sexism or racism.
So why don't we have the same standards over online abuse?
Teachers report being abused online by pupils and parents. Many schools and teaching organisations have had to develop policies as a result. But what could other organisations learn - especially if people are exposed to abuse as part of their job?
Can political parties, campaign groups, businesses or trade unions set standards for their own members to follow?
Employment advice often focuses on how an employee's conduct online can affect the image of their company or organisation. But how that conduct affects their colleagues, fellow members or others is equally important, if not more so.
How far should they be held to account for the conduct of their members or employees online?
Those who've been targeted by online abuse often say how alone they felt - unsure where to get support, whether to ignore it or how to fight back.
We want to collect some of the best ways to take on the trolls.
What's your best response? Ignoring them? Exposing them? Laughing at them? Reporting them?
"Don't feed the trolls", "it's just a spotty teenager" they tell us. But how do we best take them on...? Unleash a robot swarm to troll them back?
The choice between ignoring abusers and publicly shaming them can be hugely difficult, particularly for those of us without the writing skills and wit of JK Rowling to call upon.
Does responding to abusers show strength or only inflame the problem?
You wouldn't stand by if you saw someone being harassed or threatened in the street. So why do we stand by online? Do we have a responsibility to step in? Or can intervening end up making the problem worse?
And when things start to spiral out of control, does reporting abuse get us anywhere?
With almost a quarter of young people targeted by online abuse, how do we ensure this does not become the ‘new normal’ for the next generation?
Many believe that a good understanding of personal relationships offline shapes how people behave online. If young people are confronted with abuse, intimidation and harassment online, and are not being told that they shouldn't have to stand for it, we have a problem. If sex and relationships education were compulsory in school, as End Violence Against Women (EVAW) and countless others suggest, fewer young people would believe this sort of behaviour is acceptable or inevitable.
But the internet is also a force for good - can we promote positive messages about relationships, expectations and acceptable behaviour online too? Can the power of positive communities and role models on the internet counteract the negative impact of abuse and bullying? Are we using them to their full potential?
How can we stop online abuse being normalised for the next generation?
Does education help and if so, what should schools be teaching?
Tell us what you think
Different websites, social media and publishing platforms set different rules and standards.
Just as a pub or club landlord chooses when to throw someone out if they are harassing or intimidating others, platforms have to choose how to respond to complaints, when to intervene, or how to work with the police and others if crimes are committed.
Some online platforms and publishers have engaged in detailed debates over what their approach to online safety and online abuse should be. Others have engaged very little.
Here's what the Guardian have said about "the Web we want", the responsibility and challenges facing publishing platforms and the analysis they have done of abuse on their site: https://www.theguardian.com/technology/2016/apr/12/the-dark-side-of-guardian-comments
Here are some of the questions for any platform to answer:
On our consultation website, we've decided to protect people's privacy but not to allow anonymity. People can share their experiences online, assured that their privacy is respected. But if they are abusive, we know who they are, so we can exclude them or, if the abuse is criminal, report them to the police.
But what's your view?
What responsibility do you think platforms have to make the Internet a safe place for everyone and what changes do you want to see?
Extra reading…