Very good - I have been in coding and reporting for many decades, and you are quite right. One of the most successful companies I worked for produced what I thought were very basic reports. Bog simple - but the customers loved them. Technically simple, but the owner knew her customers so well, and the reports had nothing to do with us or the tool's capabilities. Beauty is in the eye of the beholder - or your customer.
It would be nice to have a link to the slides... maybe with some way to link comment threads to each one. I love the concept of NLP as the query language. I hate having to learn varying syntax for different data structures implemented in competing products. I would love to have international standards for all common commands, with knowledge bases that could each have linked (graphed) annotations for ongoing issues.
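A minimal sketch of the "NLP as the query language" idea this comment describes: a plain-English question is wrapped in a prompt and a model emits the query. `call_llm`, the prompt template, and the toy schema are all assumptions, not any product's actual API.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (hosted API, local model, etc.).
    # Returns a canned answer here so the sketch runs end to end.
    return "MATCH (c:Customer)-[:PLACED]->(o:Order) RETURN c.name, count(o)"

def nl_to_query(question: str, schema: str, dialect: str = "Cypher") -> str:
    """Ask the model to translate a plain-English question into a query."""
    prompt = (
        f"Given this graph schema:\n{schema}\n"
        f"Translate the question into a single {dialect} query. "
        f"Return only the query.\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    schema = "(:Customer)-[:PLACED]->(:Order)"
    print(nl_to_query("How many orders has each customer placed?", schema))
```

The appeal is exactly what the comment wants: the user learns one interface (natural language) and the varying vendor syntaxes become a backend detail.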
The problem I see is each company defining its own standard. Isn't there already a standard for defining data contracts that organizations use? How does that relate to DCAT? And Dublin Core?
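On how these relate: DCAT is a vocabulary for describing datasets and their distributions, and it deliberately reuses Dublin Core (dcterms) for generic metadata like title and publisher, while data contract specs typically cover a different layer (schema, quality, SLAs). A hedged sketch using rdflib below; the dataset URI and values are invented for illustration.

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCAT, DCTERMS, RDF

g = Graph()
ds = URIRef("https://example.org/dataset/orders")  # hypothetical dataset URI

g.add((ds, RDF.type, DCAT.Dataset))                     # DCAT: this is a dataset
g.add((ds, DCTERMS.title, Literal("Orders")))           # Dublin Core: title
g.add((ds, DCTERMS.publisher, Literal("Example Ltd")))  # Dublin Core: publisher
g.add((ds, DCAT.keyword, Literal("sales")))             # DCAT-specific property

print(g.serialize(format="turtle"))
```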
Combining LLMs with decentralized/distributed knowledge graphs would be even better, because it provides greater optionality and prevents vendor lock-in. Projects like OriginTrail are already doing this at scale, with clients like the British Standards Institute.
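A minimal sketch of the underlying pattern: pull facts from a knowledge graph and hand them to the model as context, so answers are anchored to sourced assertions rather than the model's parametric memory. Both functions here are hypothetical stand-ins, not any particular SDK (OriginTrail's DKG has its own client libraries).

```python
def fetch_facts(entity: str) -> list[str]:
    # Stand-in for a SPARQL/graph query against your knowledge graph.
    return [f"{entity} publishes standards.", f"{entity} is based in London."]

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; echoes the prompt so the sketch runs.
    return prompt

def grounded_answer(question: str, entity: str) -> str:
    facts = "\n".join(f"- {f}" for f in fetch_facts(entity))
    prompt = f"Answer using only these facts:\n{facts}\nQuestion: {question}"
    return call_llm(prompt)

print(grounded_answer("What does the BSI do?", "British Standards Institute"))
```

Because the graph layer is swappable, the same prompting code works against any backend, which is where the optionality comes from.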
Great talk, but one of the fundamental problems with Neo4j is its lack of scalability for actually handling large data. When your database grows into the hundreds of millions of nodes and relationships, it becomes a nightmare to work in.
A good start would be to understand GraphDB. Then you'll need to learn some ML algorithms that help you quantify the features used for classification and generalization. It's basically a quantification of the entities being featured to arrive at a weighted determination of interest, i.e. a mathematically calculated guess based upon accumulated observed/measured "facts". These facts, of course, are observed and measured by humans (always subjective), but are supposed to be less biased by virtue of the volume of statistical data considered, which makes people feel their decisions are more "justified" because they are the more "commonly" observed eventualities. However, by always arriving at the most common probabilities, these kinds of determinations exclude the sorts of decisions that created people like Einstein, Musk, or any other exceptional decisions that made large impacts on humanity's existence. ;)
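A small sketch of the "weighted determination" described above: features quantifying each entity are combined with learned weights to produce a probability, i.e. the most statistically common outcome given the data. The features and numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row quantifies one entity: [num_connections, avg_activity_score]
X = np.array([[2, 0.1], [8, 0.9], [1, 0.2], [9, 0.8], [3, 0.3], [7, 0.7]])
y = np.array([0, 1, 0, 1, 0, 1])  # observed, human-labelled "facts"

model = LogisticRegression().fit(X, y)
print(model.coef_)                      # the learned feature weights
print(model.predict_proba([[5, 0.5]]))  # calculated guess for a new entity
```

Note how the model can only interpolate toward the common pattern in its training data, which is exactly the limitation the comment points out.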
The argument made by *some* doomers is not that we are close to AGI, but that when it eventually arrives, we are doomed unless we can figure out a way to align it. The remaining distance may be great in terms of capability, yet short in terms of time if we keep accelerating development.
When I joined Digital Fineprint I was astonished to find the dev and data teams not only separate, but not even talking to each other; it was madness. Very quickly I brought those teams together and we operated as one, and now we do exactly the same at Taveo.
Hello, I am working at Deliveroo as a rider. I am just interested: what programs or systems do you use for planning or forecasting the best route for riders and drivers to pick up orders? Maybe it would be interesting to see and to practice analyzing. Also, do you have a plan, forecast, or past data for when you are short of riders, and in which areas or cities?
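Purely illustrative of the kind of shortest-route computation a dispatch system might run under the hood; this is not Deliveroo's actual stack, and the graph and travel times (in minutes) are made up.

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("restaurant", "junction_a", 4),
    ("junction_a", "customer", 6),
    ("restaurant", "junction_b", 3),
    ("junction_b", "customer", 9),
])

route = nx.shortest_path(G, "restaurant", "customer", weight="weight")
minutes = nx.shortest_path_length(G, "restaurant", "customer", weight="weight")
print(route, minutes)  # ['restaurant', 'junction_a', 'customer'] 10
```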