Every day I wake up at around 6am. I go downstairs, put the dog on his lead and take him for a walk. We go to the same park just around the corner, the one with the old oak tree by the lake, where he does his business as he always does. He loves that tree. Then I walk him back home, have a bowl of cornflakes, read the news and check my emails.
After that, I leave the house and go to the tube station. On my way, I visit the local café, where I order a latte from the same woman I order my latte from every single day, the one who always wears her hair in a bun and never wears lipstick. I don’t even need to tell her my order; she usually has it ready as soon as I get there – which is ideal, as it means I don’t have to speak to anyone in the morning before I’ve had my caffeine fix.
I’m a creature of habit. Most of us are, and our behaviour is predictable. In fact – and you may find this uncomfortable to read – studies of mobile phone data suggest that human behaviour is up to 93% predictable.
So recently one morning, when the woman handed me my coffee, I noticed that her hair was down and she was wearing bright red lipstick. I didn’t think too much about it but I did fleetingly think it was strange. It suited her. I forgot about it and went back to my routine. I had a train to catch.
Then guess what? A week later I saw her chatting to a good-looking barman. My memory kicked in and I put two and two together. There was a new relationship in the air. How lovely.
Of course, it was only by chance that I noticed this change in the woman from the café, but you know who would have noticed this change instantly? A computer.
Changes in mood and behaviour can be subtle – so subtle that the brain often cannot detect them or save them in our memory. They often don’t stick, and if they do, they leave only a slight trace. A computer, however, sees each one as a big black binary mark. Computers have total memory recall and can be designed to retain, analyse, compare, predict and interpret data. They can be taught to find patterns, learn from them and alert someone when something changes. In short, they can detect anomalies in behaviour patterns. They can spot the lipstick!
So, what are these patterns of behaviour? And can a computer really predict what we are going to do?
We are predictable
“Similarity breeds connection,” according to sociologists Miller McPherson, Lynn Smith-Lovin and James Cook in their paper Birds of a Feather: Homophily in Social Networks. Homophily is the tendency within social groups for similar people to connect on some level, across relationships including marriage, friendship and work. People of different genders, races, ages, classes, educational backgrounds and ethnicities all have different characteristics that shape who they choose to socialise with. To paraphrase Aristotle in his Nicomachean Ethics: “people love those who are like themselves.”
I’m specifically interested in how this principle can help us to safeguard children online. The principle whereby similar people come together and form social groups helps machine learning systems to detect anomalies in patterns of behaviour and to recognise when a discrepancy has occurred. For example, if your 14-year-old Zoella-obsessed daughter starts chatting to an 18-year-old punk metal fan, or if your 12-year-old Chelsea Football Club-supporting son suddenly connects with a 16-year-old boyband super-fan from Sydney, a system could see that something was out of the ordinary and potentially not right. When these changes occur and they don’t make sense, computers are able to flag the connection and look deeper into the data patterns to see why the behaviour is breaking the norm.
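To make the idea concrete, here is a toy sketch of homophily-based flagging in Python. It does not reflect any real safeguarding product; the profile fields, weights and threshold are all invented for illustration.

```python
# Toy homophily check: score a new connection by how far two profiles
# diverge in age and shared interests, and flag large divergences.
# All fields, weights and thresholds are invented for illustration.

def connection_anomaly_score(profile_a, profile_b):
    """Return a 0..1 score; higher means the pairing is more unusual."""
    age_gap = abs(profile_a["age"] - profile_b["age"])
    # Normalise the age gap: a gap of 10 years or more maxes out this term.
    age_term = min(age_gap / 10.0, 1.0)
    shared = set(profile_a["interests"]) & set(profile_b["interests"])
    total = set(profile_a["interests"]) | set(profile_b["interests"])
    # Jaccard distance: 1.0 when the two share no interests at all.
    interest_term = 1.0 - (len(shared) / len(total) if total else 0.0)
    return 0.5 * age_term + 0.5 * interest_term

def flag_connection(profile_a, profile_b, threshold=0.7):
    """True when the connection looks unusual enough to warrant a look."""
    return connection_anomaly_score(profile_a, profile_b) >= threshold

child = {"age": 12, "interests": ["football", "chelsea", "gaming"]}
stranger = {"age": 16, "interests": ["boybands", "sydney"]}
friend = {"age": 13, "interests": ["football", "gaming"]}
```

A connection between `child` and `friend` scores low (similar age, shared interests) and passes quietly, while `child` to `stranger` scores high and gets flagged for a closer look – the lipstick moment, automated.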
The cadence of communication is important too; children and young adults tend to brag and banter with each other, which is a very distinctive pattern of communication. It helps us to pick up when people are posing as children and attempting grooming. Similarly, this cadence is well established during the build-up to a sext: it’s a ‘go on, go on, go on, send it’ style of exchange – very quickfire, much like children daring each other to do something they shouldn’t offline. These patterns of behaviour are distinctive, and when statistical and probability analytics are overlaid, they start to yield predictable outcomes. For example, sexting is more common between the ages of 13 and 17 and increases with age; boys receive more images than girls; exchanges commonly happen one-to-one; and 89% happen at the beginning of a new relationship. The application of statistical analysis significantly reduces false positive outcomes.
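The quickfire cadence described above can be sketched as a simple burst detector over message timestamps. This is purely illustrative; the gap and run-length thresholds are invented, and real systems would combine cadence with many other signals.

```python
# Toy cadence detector: count runs of rapid-fire messages.
# The 5-second gap and 4-message run length are invented thresholds.

def count_quickfire_bursts(timestamps, max_gap=5.0, min_run=4):
    """timestamps: message times in seconds, ascending.
    Returns the number of runs of at least `min_run` messages where
    each message follows the previous within `max_gap` seconds."""
    bursts, run = 0, 1
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev <= max_gap:
            run += 1
        else:
            if run >= min_run:
                bursts += 1
            run = 1  # gap too long: the burst, if any, has ended
    if run >= min_run:
        bursts += 1
    return bursts
```

A leisurely chat with a message every minute produces no bursts, while a ‘go on, go on, send it’ flurry of messages a second or two apart registers immediately.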
But isn’t this all common sense? Well, not always. We are frequently too busy to pick up on these things, or are simply trying to give our children space online to explore, have fun and be themselves. With children on average sending and receiving over 200 messages a day, it becomes impossible for a human to spot delicate and intricate patterns. Not so in the world of AI and machine learning.
Mum and dad, for example, can drop the need to snoop, pry and spy on their children, as the robots do all the hard work, helping to safeguard children whilst also ensuring their privacy. It is this latter point that I get excited about. Children have rights too, and apps that disclose to a parent what their children are sending and receiving are not only intrusive and invasive but damage the safeguarding industry. Children simply find ways to avoid and bypass them.
The computer will see you now
This principle of automated discrete behavioural patterning is well established in medicine. For example, Andrew Beck from Harvard Medical School ran the records of women with breast cancer through a machine learning algorithm to see if the AI could identify whether a given biopsy was cancerous, which would then determine the patient’s course of treatment. The only extra information he gave the AI was each patient’s survival outcome. The AI was able to identify 11 signs that a biopsy was cancerous. The medical community, however, only knew around eight of them; the other three had never been picked up by a human before.
Similarly, when we look at patterns of behaviour, certain traits crop up. Machine learning can spot a troll, predator, bully or groomer by observing, patterning and analysing; it can then raise a flag if it suspects a potential threat. We know that, statistically, comments on YouTube that are between three and eight words long have a 72% chance of being abusive, and we know that an increased number of capital letters, exclamation marks and first person pronouns is characteristic of the language of a troll.
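Those signals – short length, capital letters, exclamation marks, first person pronouns – can be combined into a crude heuristic score. This is a sketch, not a production classifier; the cut-offs and equal weighting are my own invention, not figures from any deployed system.

```python
import re

# Crude troll-signal score from the traits mentioned in the text.
# Thresholds and weights are invented for illustration only.

FIRST_PERSON = {"i", "me", "my", "mine", "myself", "we", "us", "our"}

def troll_signal_score(comment):
    """Return 0 (benign) .. 4 (many troll-like signals)."""
    words = re.findall(r"[A-Za-z']+", comment)
    score = 0
    if 3 <= len(words) <= 8:            # short comments skew abusive
        score += 1
    caps = sum(1 for c in comment if c.isupper())
    if caps > len(comment) * 0.3:       # shouting in capitals
        score += 1
    if comment.count("!") >= 2:         # stacked exclamation marks
        score += 1
    if any(w.lower() in FIRST_PERSON for w in words):
        score += 1                      # heavy first-person framing
    return score
```

A comment like “I HATE THIS SO MUCH!!!” lights up every signal, while a longer, calmer sentence scores zero – and, as the article notes, overlaying proper statistics on heuristics like these is what keeps false positives down.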
By analysing these patterns of behaviour, machine learning algorithms can spot changes that go against the grain – whether or not a human would ever notice them.
Let the robots lend a hand
The ability to track and analyse human behaviour is vital when it comes to being able to detect harm or potential harm. Computers can make sense of the confusing, emotive and sometimes scary online world. The algorithms can understand chaotic human behaviour and find patterns in linguistic traits, social media content and even likes on Facebook.
Using these systems, personalities can be patterned online, allowing for proactive and pre-emptive action to be taken to help reduce abuse and hate online, to spot that the quiet and shy girl is suddenly loud and flamboyant, or to flag cancerous cells before a human can even see them.
We all love to people watch. We poke our heads out of the living room window because we heard some people on the street or we sit outside a café watching people aimlessly wander by. We may notice that a person is wearing a green hat or is eating an apple, but we won’t observe much more. We need our computers to watch alongside us, to learn and analyse information in order to help us understand things we never would have seen – so that we can blithely get on with ignoring the majority of the world while enjoying a latte.
Now, if you’ll excuse me, the dog needs to find that tree.
If you fail a university course or lose your job because you are so distracted by your smartphone then is the phone manufacturer to blame? Is addiction the problem?
It sounds like a stupid question, but think about it slightly differently. The tobacco and alcohol industries both invest heavily in programmes designed to prevent and treat addiction – likewise with the gambling industry. Any industry selling a product that could potentially be addictive is forced to take measures to prevent addiction, such as funding awareness programmes or treatment charities.
You can argue that the measures are not enough. Gambling companies want to encourage gambling, not stop their customers logging in and placing bets, but most organisations are responsible. The gambling companies don’t want a wave of online poker addicts that would see their entire industry more heavily regulated or closed down completely. The drinks companies want you to enjoy a good night out, but not to the extent that you rely on drinking their products every night and end up sleeping rough. Addictive products have a place in society when they are used responsibly.
But are we ignoring the addictive nature of smartphones? Try taking away someone’s phone today and see how long they can cope without it. Look around on a train or bus and see how many people are lost to the real world, gazing blankly into their phone. Phone separation, or battery status, is now a genuine source of anxiety for many people.
The Apple Chief Design Officer Jony Ive recently said that constant use of an iPhone is actually misuse. He uses an Apple Watch to filter the number of notifications he personally has to interact with. What he is implying is that most people are constantly distracted by their phone. The product is being misused and this is the man who designed the iPhone.
When Tim Cook was asked if the iPhone creates poor social behaviour he dodged the question, but now that we are a decade into the smartphone era, the data is starting to arrive. The Wall Street Journal recently published research indicating that college students who left their phone outside the lecture theatre – and were therefore more focused on the class – scored a full grade higher. Academics believe that intellectual reliance on smartphones is having a seriously adverse effect on mental skills such as problem solving and creativity.
Apple and Samsung do all they can to encourage us to use our phones even more. Researchers suggest that the average American interacts with their phone at least 80 times a day. When the Financial Times profiled how British teenagers relate to their phones, they found that 13-year-olds feel a closer relationship to their phone than to other family members. The phone has become a family member. What happens when we move beyond smartphones to wearables and implants?
I’ve talked for several years to corporate leaders about how they need to change the way they talk to customers because the way that customers talk to each other has changed. It’s obvious really. When is the last time you called a family member for a catch-up? In fact when is the last time you called anyone or answered a call from a number that isn’t in your contact list? Voice calls feel disruptive today when compared to texting. It’s a lot to expect the recipient of a call to drop everything they are doing so they can focus on a conversation with you.
Conversation is now largely through text and social networking platforms. Families are held together by Facebook. Kids share activities from their day via Snapchat. I’m not an anthropologist, but I can see that in just ten years there has been a complete revolution in how humans interact and communicate. There must surely be an effect on how we process information and learn – we just don’t know what will change at this point. It’s still too early and change is coming so fast.
What is becoming clear though, is that for all the incredible communication benefits of smartphones there is a downside to constant distraction. It affects study, work, and relationships. How many times have you seen a couple in a romantic restaurant with both of them in a deep conversation with their smartphone rather than each other?
Will Apple and Samsung need to start behaving like Diageo, Philip Morris or Paddy Power and accept that they are manufacturing products that can potentially be addictive? I don’t think we have any detailed answers on this yet, but my suspicion is that they are going to need to face up to the problem of smartphone addiction, and the first cases will not be far away.
One of my proudest moments was when, at 17 years old, I became the first teenager in the world to achieve 1m App Store downloads with my facial recognition app, Face Rate. The app eventually went on to get nearly 7m downloads in total before I licensed the software to News Corp and was offered the position of Head of Digital Product Innovation, but that story’s for another time.
Back then, I was just testing a bunch of different technologies; experimenting with building apps during the early days of the App Store. This new environment allowed me to play with unusual ideas in a world where people were taken by the novelty of being able to grill a digital steak on their phone’s screen or drink an electronic pint of beer by tilting their phone near their mouth.
Nearly a decade later, Apple has thrown facial recognition technology into the headlines again at its recent Keynote, where it launched the iPhone X. Granted, not all the headlines were great; there were one or two about the technology failing on stage during its live unveiling. That aside, the launch really got me thinking about how much technology has advanced over the last 10 years. Facial recognition has gone from being a fun experiment for myself and others in the early days to being adopted by some of the biggest companies in the world. Snapchat uses the tech to create playful filters, which have encouraged hundreds of millions of downloads, and now Apple has taken it a step further by introducing it for security functionality.
Facial recognition technology uses a whole bunch of libraries built to recognise different features on a face, be it eyes, nose, mouth or whatever. The software maps these features as points – often hundreds of thousands of them – using different algorithms, locating the different parts of your face and converting them into metadata. That’s how things like Snapchat filters work: they take the metadata off the back of an image of a face and apply silly dog ears and the like, which become very accurate. One of the reasons I think this technology is so commercially successful is that it’s one of the first real use cases of augmented reality (AR) in action. AR is just a layer of technology over real life, and Snapchat brought it into the limelight. It’s perhaps a bit faddy, but a great example of how this could potentially work on a much wider scale.
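As a simplified illustration of that points-to-metadata idea, the sketch below treats landmarks as a handful of named coordinates and anchors a dog-ears overlay relative to the eyes. Real systems detect far more points; the five landmarks and the anchoring rule here are invented for the example.

```python
# Toy AR overlay anchoring: facial landmarks become metadata (named
# points), and an overlay is positioned and scaled from those points.
# Landmark names and the anchoring rule are invented for illustration.

def overlay_anchor(landmarks):
    """landmarks: dict of named (x, y) facial points in image pixels.
    Returns the (x, y) midpoint above the eyes where an 'ears' overlay
    would be drawn, plus a width scaled to the inter-eye distance."""
    lx, ly = landmarks["left_eye"]
    rx, ry = landmarks["right_eye"]
    eye_dist = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    centre_x = (lx + rx) / 2
    centre_y = (ly + ry) / 2 - eye_dist  # one eye-width above the eyes
    return (centre_x, centre_y), eye_dist * 2  # position, overlay width

face = {
    "left_eye": (100, 120),
    "right_eye": (160, 120),
    "nose": (130, 150),
    "mouth_left": (110, 180),
    "mouth_right": (150, 180),
}
```

Because the overlay is computed from the landmarks rather than fixed pixel positions, it tracks the face as it moves and resizes – which is roughly why those filters stay so accurate.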
The fact that Apple’s latest handsets can now replicate your face digitally and use it as a security measure to grant you access to your phone – or bank account – takes things to a whole other level. Your face is far harder to replicate than your fingerprint, which makes your device – all being well – so much more secure. To understand this, think about how much our biometric thumbprints differ from one another; everyone’s face differs on another scale entirely. This would suggest security will be tighter than ever.
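A toy sketch of the matching idea: the enrolled face and a new capture are each reduced to a numeric feature vector, and access is granted only when the two are close enough. Real systems such as Face ID use dedicated secure hardware and far richer models; the vectors and threshold below are invented.

```python
# Toy biometric matching by distance between feature vectors.
# Vectors and threshold are invented; real systems are far richer.

def euclidean(a, b):
    """Straight-line distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def matches(enrolled, capture, threshold=0.6):
    """Grant access only when the new capture is close to the template."""
    return euclidean(enrolled, capture) < threshold

enrolled = [0.12, 0.80, 0.45, 0.33]
same_person = [0.15, 0.78, 0.47, 0.30]  # small capture-to-capture variation
impostor = [0.90, 0.10, 0.75, 0.60]
```

The same person photographed twice produces slightly different vectors that still fall within the threshold, while an impostor lands far outside it. Where the threshold sits is the security trade-off: too loose and impostors get in, too tight and your own face gets rejected.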
Of course, as with all technology, there are potential pitfalls to consider. It’s not just that your phone might fail to recognise your face and deny you entry; my biggest fears are: will these devices continue to be hacked, as we’ve seen in the past, and, if they are, how damaging will this be? If my phone gets hacked and someone has access to my biometrics, they could take out loans or mortgages in my name without me knowing; this kind of infringement could literally ruin people’s lives. The consumer credit reporting agency Equifax was recently involved in a huge hack in the US, in which the data of more than 140m people was breached. That’s not just emails and passwords; it’s things like social security numbers as well. That stuff is obviously incredibly valuable to us and, in the wrong hands, very dangerous. Now, all of a sudden, we’re starting to add biometric data to our phones using things like Apple’s facial recognition. The level of security here needs to be unshakable.
For me, this is such a risky area. Everyone’s so focused on the positives, that they’re maybe not taking these issues into account. With the power of Apple, I agree there’s a very small chance of things going majorly wrong, but if it does – the risks are very real.
What’s cool to observe following the Keynote, however, is just how powerful these devices are becoming. The iPhone X is – in some cases – more powerful than a top-of-the-range laptop! This means the range of things you can use your phone for is increasing at an exponential rate – a far cry from the digital pint. It’s clear Apple’s primary focus is on making a huge push into AR, and that opens up loads of exciting avenues. The updates to the App Store show us Apple now has a dedicated space for AR-specific apps, which has already seen some incredible games launched, and we can expect a lot more.
Back when I was working on Face Rate, the technology was so immature that it didn’t actually work very well. That’s simply down to the processing power of the iPhones and other devices on the market at the time; they just weren’t there yet. I thought it was a genuinely novel idea and believed that if I could make it work on a mobile device, it would have the potential to go mainstream. The reality, though, is that what the technology does now just wouldn’t have been possible on the hardware I was working with.
Today, there are a million and one opportunities, especially as the technology continues to improve, as it naturally will over time. Usually, when something is used by Apple, it goes mainstream. What might have started as niche, or as something the company created itself, normally ends up popping up en masse. We just have to look at the digitising of music and the life-changing impact of the App Store for evidence of this. If it’s effective and works well, then you can absolutely expect others to follow suit.
Apple’s recent event certainly indicates that the company sees facial recognition as a massive part of what it becomes over the next five to ten years, meaning we’re potentially on the cusp of another exciting and innovative new era. Let’s just hope it’s a secure one too.