I don’t know how else to say this, but Google has a God-given right to your data. At least, that’s what its privacy policy now seems to say. Computers save us time, but they also cost us between 11 and 20 per cent of our time, according to one study. And there are issues with AI-generated code.
These and more top tech news stories.
I’m your host Jim Love, CIO of IT World Canada and Tech News Day in the US.
A quick update on the latest Twitter woes. Over the weekend, Twitter had some accessibility issues. As usual, the explanations from Twitter are about as useful as the “poop emoji” it sends to any journalist who asks questions.
The long and short of it is that Twitter has had some issues, which Elon Musk attributes to “aggressive scraping” of Twitter data by AI programs. Users have reported getting error messages that say things like “rate limit exceeded.” This may be, as Twitter has indicated, the effect of limits that Twitter has imposed on users, or it could be, as one programmer suggested, that Twitter has started DDoSing itself.
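For context, “rate limit exceeded” is the message an API typically returns (often with HTTP status 429) when a client makes too many requests in a short window. A well-behaved client backs off and retries more slowly; a client that retries in a tight loop can flood the server, which is what the “DDoSing itself” theory describes. Here’s a minimal, purely illustrative sketch of the polite version, using a hypothetical fetch_with_backoff helper and a simulated request function rather than any real Twitter API:

```python
import time

def fetch_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call request_fn(); on a rate-limit response (status 429),
    wait and retry with exponentially growing delays instead of
    immediately hammering the server again."""
    delay = base_delay
    for _ in range(max_retries):
        status, body = request_fn()
        if status != 429:      # success, or an error that isn't rate limiting
            return status, body
        time.sleep(delay)      # back off before the next attempt
        delay *= 2             # 1s, 2s, 4s, ... between retries
    return 429, "rate limit exceeded"
```

The point is only illustrative: skip the back-off and retry in a tight loop, and enough clients together start to look like the aggressive scraping, or the self-inflicted request flood, described above.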
The BBC said it best: “Confusion at Twitter Continues.”
Can somebody explain to me why we would trust the guy who runs Twitter to develop a self-driving car? Just askin’
Sources include: BBC
Google has updated its privacy policy, and it’s causing quite a stir. The tech giant now explicitly states that it reserves the right to scrape just about everything you post online to build its AI tools. That’s right, folks: if Google can read your words, they might as well belong to the company now.
Google says it uses this information to improve its services and develop new products, features, and technologies that benefit users and the public. But this new policy raises some interesting privacy questions. It’s no longer just about who can see your information, but how it could be used.
Imagine this: your long-forgotten blog posts or 15-year-old restaurant reviews could be ingested by AI models like Google’s Bard or OpenAI’s ChatGPT. As we speak, these chatbots could be regurgitating some version of your words in ways that are impossible to predict and difficult to understand.
This development also sparks a fascinating discussion about copyright and legality. Is it legal for companies to scrape vast portions of the internet to fuel their AI? The courts are going to be wrestling with this question in the coming years, and it’s sure to be a hot topic.
In the meantime, Twitter and Reddit have made controversial changes to lock down their platforms, turning off free access to their APIs. This move is meant to protect their intellectual property, but it’s also broken third-party tools that many people used to access these sites.
So, what does this mean to you? Well, next time you post something online, remember: your words could end up training an AI model somewhere.
Sources include: Gizmodo
Even though our computers are more advanced than they were 15 years ago, they still fail to work as we expect between 11 and 20 per cent of the time, according to a new study from the University of Copenhagen and Roskilde University. That’s right, folks, we’re wasting up to a fifth of our time on computer problems.
The researchers behind the study believe that there are major gains to be achieved for society by rethinking the systems and involving users more in their development. After all, who knows better about the issues we face than us, the users?
The study found that the most common problems include systems being slow, freezing temporarily, crashing, or being difficult to navigate. And these aren’t just issues for the tech-illiterate. Many of the participants in the study were IT professionals or highly competent computer users.
But here’s the kicker: 84 per cent of these issues had occurred before, and 87 per cent could happen again. It seems we’re dealing with the same fundamental problems today that we had 15–20 years ago.
So, what’s the solution? The researchers suggest that part of it may be to shield us from knowing that the computer is working to solve a problem. Instead of staring at a frozen screen or an incomprehensible box of commands, we could continue to work with our tasks undisturbed.
The researchers also emphasize the importance of involving users in the design of IT systems. After all, there are no poor IT users, only poor systems.
Sources include: University of Copenhagen
In a recent development, Microsoft and GitHub are making efforts to dismiss a lawsuit over alleged code copying by GitHub’s Copilot programming suggestion service. The argument put forth is that generating similar code is not the same as reproducing it verbatim.
The lawsuit was initiated by software developers who claim that Copilot and its underlying OpenAI Codex model have violated federal copyright and state business laws. They argue that Copilot has been configured to generate code suggestions that are similar or identical to its training data, which includes publicly available source code from the plaintiffs’ GitHub repositories.
The plaintiffs’ main issue is that Copilot can reproduce their work, or something similar, without including the required software license details. However, Microsoft and GitHub argue that the plaintiffs’ argument is flawed as it fails to articulate any instances of actual code cloning.
The tech giants also argue that the plaintiffs’ claim focusing on the functional equivalency of code does not work under Section 1202(b) of America’s Digital Millennium Copyright Act. This section of the law forbids the removal or alteration of Copyright Management Information, or the distribution of copyrighted content when it’s known that the information has been removed.
Furthermore, Microsoft and GitHub challenge the complaint’s assertion that they are liable for creating a derivative work simply through the act of AI model training. They argue that this is fundamentally a copyright claim and federal law preempts related claims under state law. Got all that?
The companies maintain that GitHub users decide whether to make their code public and agree to terms of service that permit the viewing, usage, indexing, and analysis of public code. Therefore, they argue, the site’s owners are within their rights to incorporate the work of others and profit from it. The case is set for a hearing on September 14.
Sources include: The Register
Hands up everyone who hasn’t copied some code from somewhere else when doing a new program. Okay, all the hands are up? So maybe copyright claims were potentially rampant before AI. But there are also some issues about accuracy.
The Mozilla Developer Network (MDN), a popular resource for web developers, recently introduced an AI-based assistive service called AI Help. However, the service is now under fire for providing incorrect advice. The AI Help service, based on OpenAI’s ChatGPT, was designed to optimize search processes and provide pertinent information from MDN’s comprehensive repository of documentation.
However, developers have reported that AI Help often gives wrong answers and even contradicts itself. It has been criticized for misidentifying CSS functions, erroneously explaining accessibility functions, and generally misunderstanding CSS. The backlash from the developer community has been intense, with many expressing their lack of trust in MDN due to the inclusion of AI Help.
In response to the criticism, an MDN core maintainer appears to have taken notice of the issue. As of now, the AI Explain function, a part of AI Help that prompts the chatbot to weigh in on the current web page text, has been paused. The future of AI Help on MDN remains uncertain.
Sources include: The Register
The social web is undergoing a significant transformation. Major platforms like Twitter and Reddit are experiencing declines, while others like TikTok and Instagram are shifting towards becoming entertainment platforms. The reasons for these changes are multifaceted, including economic downturns, investor demands for returns, and the rise of AI.
The shift from public to private, from growth and engagement to revenue generation, and from social media to entertainment platforms is reshaping the internet. The era of social media is giving way to the era of “media with a comments section.” The focus is now on entertainment and monetization, often at the expense of user connectivity and community.
The future of social interaction on the web appears to be moving towards group chats, private messaging, and forums. However, this shift leaves a void for a platform that can bring everyone together in a single space. The so-called “fediverse” apps like Mastodon and Bluesky, which are based on open protocols, could potentially fill this gap, but they are not yet ready for mainstream adoption.
The current state of the social web leaves users longing for a platform that feels like a good, healthy, worthwhile place to just hang out. However, such a platform does not currently exist. The downfall of social networks may be inevitable, but the need for a global water cooler persists. The question remains: where will everyone go next?
Love to hear your opinions on this one. I’m fighting to find time to keep our Mastodon site alive, but it’s a lot of work and it’s been difficult to find the time. Love to hear your ideas.
Sources include: The Verge
And finally
Roger Anderson, a California resident, has created a unique solution to deal with unwanted telemarketing calls. He operates a subscription service called Jolly Roger, which uses a ChatGPT-powered tool to engage telemarketers and scammers in conversation, with the aim of wasting their time.
The service, which costs about $25 a year, uses artificial intelligence to handle the interaction. The chatbots use a combination of preset expressions and topic-specific responses, all fed through a voice cloner to make the telemarketer believe they’re talking to a real person.
One example of an interaction involved a chatbot named “Whitey” Whitebeard, who engaged a caller attempting to fish for financial information. The chatbot gave nonsensical responses, dragging the conversation out and ultimately leading the caller to hang up after more than six minutes.
The goal of the Jolly Roger service is not just to frustrate telemarketers and scammers, but also to protect users from potential identity theft. Some of the chatbot calls can last up to 15 minutes, keeping the scammers occupied and away from potential victims.
Sources include: WDPE
That’s the top tech news stories for today.
Hashtag Trending goes to air five days a week with a special weekend interview episode called Hashtag Trending, the Weekend Edition. You can follow us on Google, Apple, Spotify or wherever you get your podcasts.
We’re also on YouTube five days a week with a video newscast, although there we’re called Tech News Day. And if you’re there, please check us out and give us a like; it needs a boost.
We love to hear your comments. You can find me on LinkedIn, Twitter, or on our Mastodon site technews.social, where I’m @therealjimlove.
Or if this is all too much to remember, just go to the article at itworldcanada.com/podcasts and you’ll find a text version with additional links and references. Click on the x if you didn’t like it, or the check mark if you did, and tell us what you think.
I’m your host, Jim Love. To our Canadian listeners, welcome back and have a Terrific Tuesday. To our American listeners, Happy Fourth of July.
Copyright for syndicated content belongs to the linked Source : ITBusiness.ca – https://www.itbusiness.ca/news/hashtag-trending-jul-4-googles-new-private-policy-causes-a-stir-study-shows-computer-problems-waste-a-fifth-of-our-time-microsoft-and-github-attempt-to-dismiss-lawsuit-over-alleged-code-copying/125475