
Cortana Analytics Suite

There were many highlights from day 1 that I'm still processing as I develop my initial thoughts and reactions. Rather than replay the announcements, which many of my colleagues and fellow WPC attendees have already helpfully covered in their various tweets, I was keen to augment them with additional research and thoughts and, in particular, to assess their relevance to Business Intelligence customers. More on that shortly. Stay tuned.

With my background in Business Intelligence (or, as I like to think of it, improving productivity through the application of information), the pick of the bunch was the Cortana Analytics Suite. For the technologists amongst us, there may have been a sense of being short-changed on the substance accompanying yesterday's headlines, though some inferences could be drawn from the ensuing demos of Cortana and Power BI (absolutely fantastic, I should add!).

As a slight tangent, the prominence of Cortana beyond my occasional use of it on my phone wasn't apparent to me until yesterday. Cortana is at the fore of how Microsoft are developing human interaction with applications, and whilst your mind may be rushing to the frustrations you may have experienced whilst talking (or shouting, after the second attempt at stating your date of birth) to phone routing systems and some of the earlier speech recognition systems, I encourage you to use the current distribution of Cortana available as part of Windows Phone or Windows 10 (Insider Program). It is exceptionally accurate and intelligent: not only does it learn your linguistic idiosyncrasies, it also learns and acts on your digital footprint, including search history, contact list, calendar and location. Collectively, this allows Cortana to remind you to buy a bunch of flowers for your wife when you walk past the florist, as it's her birthday in two days' time.

Returning from my tangent and back on to the core topic, here’s what Cortana Analytics Suite consists of:

Source: http://www.microsoft.com/en-us/server-cloud/cortana-analytics-suite/what-is-cortana-analytics.aspx

The eagle-eyed amongst us will notice that much of what is included above belongs to the Azure stable of products, and that many of these are not brand-new products; we have a trusted and established set of technologies, relatively speaking, which are ripe for adoption today. Also note that this is not a binary situation where the stack needs to be adopted in an all-or-nothing fashion; your specific requirements will drive the shape of the technical architecture. Having had the privilege of working with many different customers to understand requirements and translate them into technical architectures, I would be very happy to share my thoughts and experiences on any specific situation you may be facing.

Also worthy of note is the absence of on-premise software editions. This was a recurring theme across all of yesterday's sessions. The Microsoft cloud solutions are incredibly rich, functional and flexible, and in turn a rapid platform for application development and, ultimately, an approach to delivering real business value. This of course doesn't mean that the current on-premise solutions are being abandoned or becoming irrelevant; we continue to work with many customers where we are commissioning new applications and maintaining existing ones using on-premise software editions.

I’m excited to talk to customers and help them understand the benefits available through migrating their current solutions and developing new solutions which leverage this next generation system architecture.

Please get in touch if you have any thoughts or questions. Would love to hear from you.

 

I had a very interesting conversation with a customer here at the SAP User Group Conference yesterday; interesting but, unfortunately, not unusual. I suspect this situation applies to many other organisations, and I hope there is some value in sharing it.
Analytics was described as the "right aircraft and wrong airport" by this longstanding ERP customer. Analytics was a battle that had been fought many times but lost on each occasion: the execs failed to recognise the value of analytics and perceived it to be another costly and audacious IT project that was unlikely to deliver results. We started exploring the situation further and figuring out how we could address these misconceptions and deliver information in a way that isn't deemed to be an "analytics" project but is instead an intrinsic improvement in business processes.

This is a very important point and one that frequently recurs in conversations with other customers. Analytics is typically a separate project, team, system, cost centre and dataset in most organisations, yet in practice most businesses use information stemming from an analytics project to make operational decisions, e.g. which customers purchased X, and how are sales tracking this month compared to last month and last year? This information is much better leveraged as part of the business process in question, and therefore doesn't need to be badged as, and feel like, a separate activity. Detaching it in this fashion often diminishes its value and inhibits user adoption.

Returning to our conversation, it didn't take long for us to land (to use our airport metaphor) at a momentous business challenge: SAP Pricing Conditions. This customer has several hundred thousand SAP Pricing Conditions, whereby customers have unique pricing terms for different products, and the reasons for this are primarily legacy. Over the years, Account Managers have agreed different terms and this has led to a proliferation of these records. There is little evidence to justify the current terms and, for expediency, the organisation continues to process sales on this basis. There is little science or validation behind how and why pricing is set and, worryingly, it pays little regard to the current buying behaviour of a customer. This gives rise to a situation where margin is being shed because the customer is no longer purchasing the products and quantities that were envisaged at the time the pricing was set. Additionally, it presents a huge administrative overhead to maintain these records within the ERP system.

We agreed that a subtle but effective improvement in this area could deliver tremendous value and, at the same time, address the aforementioned misconceptions. The ability to analyse these customer pricing terms at the point of processing sales, or when Account Managers meet with customers, would ensure that the customer benefits from terms suited to their current buying behaviour and that the organisation is not shedding margin due to outdated pricing terms.
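To make this tangible, the kind of check we discussed might look like the minimal sketch below. All table names, columns and thresholds are hypothetical; the point is simply to illustrate comparing agreed terms against current buying behaviour inside the sales process rather than in a separate analytics silo.

```sql
-- Hypothetical check: flag pricing conditions whose assumed volumes no
-- longer match what the customer actually buys (all names illustrative).
SELECT pc."CONDITION_ID",
       pc."CUSTOMER_ID",
       pc."ASSUMED_ANNUAL_QTY",
       SUM(s."QTY") AS "ACTUAL_ANNUAL_QTY"
FROM   "SD"."PRICING_CONDITIONS" pc
JOIN   "SD"."SALES_ITEMS" s
       ON  s."CUSTOMER_ID" = pc."CUSTOMER_ID"
       AND s."MATERIAL_ID" = pc."MATERIAL_ID"
WHERE  s."BILLING_DATE" >= ADD_YEARS(CURRENT_DATE, -1)
GROUP BY pc."CONDITION_ID", pc."CUSTOMER_ID", pc."ASSUMED_ANNUAL_QTY"
HAVING SUM(s."QTY") < 0.5 * pc."ASSUMED_ANNUAL_QTY";  -- buying under half the assumed volume
```

Surfaced at the point of sale or in an Account Manager's briefing pack, a result set like this becomes part of the process rather than a standalone "analytics" deliverable.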
I thought this was a great example of how information can be incorporated as part of a business process, and an illustration of how analytics, when delivered in a suitable form, can unlock significant business opportunities.

Please do come and visit us at the itelligence booth where we would love to understand how your organisation operates and leverages SAP Technology.

Placing the customer at the heart of it, a 360 view, customer centricity, etc. The list of clichés is endless, but I wanted to share with you a relatively recent innovation from SAP: the Customer Activity Repository (CAR). This also makes the claim of customer orientation but, having become acquainted with it, I think there is merit to its claims and, in particular, that retailers could derive tremendous value from its capabilities.

 

If you are a retailer with a multi-channel presence and a relatively large transactional throughput, and you currently use SAP applications, this is certainly one for you.

 

One of the common challenges experienced by retailers tends to be the isolated view of their sales channels and the inability to view customer sales across the multiple channels they operate. CAR has been designed to address this, along with unleashing a raft of capability which will substantially improve the effectiveness of retail execution. The solution principally caters for Sales and Inventory data, both of which are loaded from SAP Retail in real time. CAR is a series of data models and Virtual Data Models (VDMs) that can be deployed to SAP HANA; more on the technical elements shortly. Once deployed and configured, the business has access to a series of engaging and valuable outputs. This includes a dashboard providing access to all pertinent metrics, along with a range of SAP BusinessObjects content purposely designed to provide engaging sales and stock information to aid retailers. As an example, this includes real-time alerts based on Shelf Availability: advanced algorithms track sales frequency alongside inventory volumes and alert users to lines that may be in stock but not on the shelf. Similarly, Demand Forecasting is calculated by leveraging the HANA predictive engines against the sales and inventory data it is able to harvest.
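To give a feel for the Shelf Availability idea, here is a minimal sketch of the kind of logic involved. This is purely illustrative and not CAR's actual algorithm; the tables and columns are hypothetical.

```sql
-- Illustrative on-shelf availability check (not CAR's actual algorithm).
-- Flag articles that show book stock but have not sold within their
-- expected sales interval, suggesting the shelf may be empty.
SELECT s."STORE_ID",
       s."ARTICLE_ID",
       i."STOCK_QTY",
       SECONDS_BETWEEN(MAX(s."SALE_TS"), CURRENT_TIMESTAMP) AS "SECS_SINCE_LAST_SALE"
FROM   "POS_SALES"       s
JOIN   "STORE_INVENTORY" i ON i."STORE_ID"   = s."STORE_ID"
                          AND i."ARTICLE_ID" = s."ARTICLE_ID"
JOIN   "ARTICLE_PROFILE" a ON a."ARTICLE_ID" = s."ARTICLE_ID"
WHERE  i."STOCK_QTY" > 0
GROUP BY s."STORE_ID", s."ARTICLE_ID", i."STOCK_QTY", a."EXPECTED_SALE_INTERVAL_SECS"
HAVING SECONDS_BETWEEN(MAX(s."SALE_TS"), CURRENT_TIMESTAMP) > a."EXPECTED_SALE_INTERVAL_SECS";
```

Running something like this continuously over real-time POS feeds is what makes the in-memory platform significant here; the same query over a batch-loaded warehouse would always be hours behind the shelf.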

 

If any of you are familiar with the RDS for Shopper Insight, think of this as the evolved and much-embellished version of that. It consumes POS data but also eradicates the dependency on NetWeaver BW, and it is capable of running POSDM as part of the CAR framework.

 

Technically, this is a native SAP HANA application which can be deployed in a number of scenarios: on-premise as a side-car, integrated with Suite on HANA, or in the cloud. I think that covers everything, but please get in touch if you are not represented by any of these install choices and we can discuss further.

 

In summary, an exciting solution which ships as an RDS and can be deployed to deliver tremendous value to a retailer in a very short timescale. Its underpinning by SAP HANA ensures optimal performance and flexibility when it comes to enhancing the content to meet your specific requirements.

 

Day two wasn't as epic as yesterday in terms of elapsed duration but closely rivalled yesterday's intensity. We seem to have lost the sunshine altogether, which should make for a useful transition back to the UK tomorrow.

Today was structured very differently to yesterday, as the keynote was followed by three distinct streams: Innovate, Sell and Implement. With the Real-Time Data Platform (RTDP) featuring in the Implement stream, it was an easy decision to start there and then work my way back to the Sell stream, where we covered sales enablement and USPs. I favoured the analogy from this morning suggesting that the buffet was laid yesterday and today was a case of working our way through it; in which case I surely opted for the meat and potatoes :-)

 

To be honest, I was slightly underwhelmed by the session on RTDP, since I was expecting something more evolved than the usual PowerPoint jigsaw slide. I was hoping for substance around what this actually means in practice and how customers can realise value by deploying SAP HANA and Sybase IQ as part of the RTDP to satisfy analytical workloads. Whilst the merits of Sybase IQ and SAP HANA are clearly understood when considered in isolation, I wanted additional clarity on how they can cohesively operate as a single unified platform. Perhaps my expectations were somewhat inflated; after all, this wasn't TechEd…

 

If nothing else, the session from this afternoon has prompted me to conduct additional research and start to form a clearer view of what the combination of SAP HANA and Sybase IQ has to offer a customer seeking an analytical solution. Broadly, there are two scenarios I was interested in: BW on HANA, and HANA for agile data marts. In both situations, I wanted to understand how Sybase IQ can interoperate in the respective system architectures.

 

Let's address BW on HANA first, as this has a simple answer. BW on HANA now supports Sybase IQ as a Near-Line Storage (NLS) platform. This allows effective management of data ageing, and the most common scenario we encounter is the need to transfer "old" (warm, cool, etc.) data to IQ and persist current (hot, boiling!) data in SAP HANA, where it is certain to provide optimal performance. Partitioning the data horizontally in this fashion is often described in terms of data temperature. SAP BW, as the Enterprise Data Warehouse (EDW) application, takes care of this and acts as the central query orchestration layer, seamlessly fetching data from the appropriate store in response to a BEx query, which may ultimately have been instigated by running an SAP BusinessObjects (BOBJ) report leveraging the BICS interface. So, in summary, we could have a single parameterised report which can be run across time, and the SAP BW system would intelligently assemble the result set by coordinating data from across the two stores. Simple.

 

Since NLS is not currently natively supported by HANA, trying to achieve a similar result without SAP BW could prove to be a challenging undertaking and here are some considerations for anyone planning to embark on this journey:

  • Smart Data Access – This functionality can be used to create Virtual Tables which point to Sybase IQ tables residing on a remote server (see the sketch after this list). These Virtual Tables do not appear to be supported within Information Models, which prevents their incorporation in a Universe built over an Information Model.
  • Universe – A relational Universe can be built across SAP HANA tables, and my expectation is that this would support the Virtual Tables described above. Alternatively, we can leverage the multi-source Universe capability of BOBJ to converge these. However, this presents a further quandary around how these tables can or cannot be joined, given their typically horizontal partitioning…
  • Information Model – Further investigation is required into whether it is possible to incorporate the physical HANA Tables and Virtual Tables into custom Calculation Views using SQLScript code.
  • Sybase Replication Server – This can be used to replicate data across the HANA and Sybase IQ databases.
  • Data Services – Similar to the above, but this would also be required to purge data as the window for "old" and "new" data shifts over time.
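To make the first two bullets concrete, here is a minimal sketch of surfacing a Sybase IQ table in HANA via Smart Data Access and querying it alongside the hot data. Server names, credentials and tables are placeholders, and the exact adapter configuration will vary by landscape and SPS level.

```sql
-- Register Sybase IQ as a remote source (connection details are placeholders).
CREATE REMOTE SOURCE "IQ_NLS" ADAPTER "iqodbc"
  CONFIGURATION 'Driver=libdbodbc16.so;ServerNode=iqhost:2638'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=nls_user;password=********';

-- Expose an aged-out IQ table as a Virtual Table in HANA.
CREATE VIRTUAL TABLE "SALES"."VT_SALES_HISTORY"
  AT "IQ_NLS"."iqdb"."dbo"."SALES_HISTORY";

-- Assemble a single result set spanning the hot (HANA) and warm (IQ) partitions.
SELECT "CUSTOMER_ID", SUM("REVENUE") AS "REVENUE"
FROM (
  SELECT "CUSTOMER_ID", "REVENUE" FROM "SALES"."SALES_CURRENT"    -- hot, in-memory
  UNION ALL
  SELECT "CUSTOMER_ID", "REVENUE" FROM "SALES"."VT_SALES_HISTORY" -- warm, federated from IQ
) u
GROUP BY "CUSTOMER_ID";
```

This UNION ALL pattern is also what a scripted Calculation View would need to embed, which is precisely the open question in the Information Model bullet above.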

 

Quite a few steps, you may be thinking… The good news, however: in response to my question earlier today, the friendly SAP rep suggested that NLS with native HANA is not supported "yet". This was followed by a wry smile and the suggestion that SPS 7 is due next month. I will leave you to draw your own conclusions.

 

The inevitable challenge I foresee with either of these approaches (BW on HANA or HANA for agile marts) is the inherent dependency on a Calculation View to serve BOBJ Explorer. This architectural constraint appears to inhibit us from exploiting the true potential of the RTDP unless we can find a way to incorporate Virtual Tables as part of the SQLScript extensions. The same considerations also apply to the other BOBJ clients that do not support the BICS interface.

 

Ultimately, not all data has the same value, and therefore the flexibility and economics of being able to determine an appropriate target store can be extremely valuable. The learning from this afternoon, followed by further investigation, is leading me to the conclusion that SAP BW should be your default choice when considering Sybase IQ as an NLS platform. SAP BW has been enhanced to seamlessly and intelligently orchestrate queries which span the two data stores, and it more effectively handles the data ageing challenge I set out to answer.

 

Finally, thanks to SAP for organising a great event and farewell to everyone I met over the last two days. Wishing you all safe onwards travels.

Having weathered the storm on the way out of the UK and everything that came with it (the treacherous journey to Heathrow and a delayed flight), we arrived to glorious sunshine in Barcelona. Ironic hey!

Proceedings got underway nice and early this morning (though not so bright, as it had clouded over) with several keynote speeches from the SAP team, namely Chano Fernandez and Steve Birdsall. This was followed by a thought-provoking session by Donald Feinberg, a Gartner analyst. Irfan Khan delivered a captivating session immediately after lunch, followed by an equally enthralling session by Snehanshu Shah (I will be watching out for those vending machines to hit the UK!). Various panel discussions and a plug from HP then took us to stumps. A satisfying day's play, and further action to follow tomorrow, to continue the metaphor.

Now to some of the highlights….

There was a restatement from SAP of the significance of the D&T portfolio and of aspirations for its growth; double-digit growth was a commonly cited goal, although the various speakers were at pains to point out that this did not constitute a financial commitment; I didn't want to misrepresent this point in any way :-). I also noted the noble mission statement to "leave no product behind". I think that's a very powerful message, and it is borne out in the manifestation of the SAP HANA Data Platform, aka the SAP Real-Time Data Platform. There is incredible value to be gained from the combination of these different technologies, and unless we understand their respective roles and, importantly, their collective synergy, we stand to short-change ourselves and the customer. This also relates very closely to the Nexus of Forces, but more on that shortly…

 

Donald presented a number of incredibly powerful concepts which really resonated with me. Let me share some of these with you along with my supplementary thoughts.

  • Best practice or ex-practice – The example provided related to tapes, but one that is closer to me is the Data Warehouse. In my opinion we too readily accept Dimensional Modelling, denormalisation, staging, etc. as irrefutable best practices. These best practices must be challenged, and are being challenged, by the advent of in-memory computing, and we should be open to re-establishing best practices suited to the modern era and, specifically, to the business models and technological environment before us. It's no longer necessary to be so prescriptive and beholden to principles such as the separate storage of data for analysis, the requirement to store everything needed for reporting in a single database, and the denormalisation of all this data. Whilst the DW concept remains applicable, its implementation has profoundly changed.
  • Business Orchestration Model – Related to the above point, it's simply impractical to store data of every size and format in a single database, despite the supremacy of the SAP HANA Data Platform. We may have petabytes of unstructured data, social media data, web logs and other streaming data that is required to support decision making, but this doesn't and cannot translate to a need to store every byte of this data in a purpose-built Data Warehouse. Instead we should be considering seamless Business Orchestration Models which allow users to answer these questions and have the SAP HANA Data Platform determine the appropriate source for the data. SAP HANA SPS 6 introduced the Smart Data Access functionality, which allows us to link back to a number of these data sources, e.g. Hadoop, and retrieve data for localised processing (a sketch follows after this list).
  • Nexus of Forces – A concept that really came to life for me through the vivid illustrations provided this afternoon. This refers to four interdependent trends: social, mobility, cloud and information. Each is compelling in its own right, but the key is to drive synergy through their collective application and adoption. This reminded me of the incredible outcomes we were able to deliver for a recent customer by providing sales information via mobile devices, together with the ability to socialise this and share it with other parties through the cloud infrastructure. This streamlined decision making and pushed it right to the customer touchpoint where it mattered, and then provided a feedback loop to collaborate internally and externally using different permutations of this information.
  • Performance – old hat with HANA you may think….the point this afternoon related to how we can now run incredibly powerful predictive algorithms in SAP HANA through leveraging the PAL functionality and as a consequence, allow businesses to make effective decisions before the actual events have unfolded. This can easily be taken for granted and we should remember that such onerous computational activity previously lagged behind the actual events and was merely of interest. We are now equipped with this information to influence and shape the future; The WHO proactive polling of disease profiles gauged through social media was a poignant example.
  • Total Cost of Ownership is very different to Total Cost of Acquisition – Whilst both are important, it's equally important not to confuse one with the other. I would also add that there have been a number of studies into the price comparison between disk and RAM, and whilst the perception is that RAM is a lot more expensive than disk, when you compare the two on the basis of performance per second, RAM is the far more economical choice.
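On the Smart Data Access point above, here is a minimal sketch of federating a Hadoop (Hive) table into HANA. The adapter name is the one documented for SPS 6, but the DSN, schemas and tables are placeholders, and the four-part remote path varies by source.

```sql
-- Register a Hive endpoint as a remote source (connection details are placeholders).
CREATE REMOTE SOURCE "HADOOP_LOGS" ADAPTER "hiveodbc"
  CONFIGURATION 'DSN=HIVE_DSN'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hive;password=********';

-- Surface a Hive table of raw web logs without copying it into the EDW.
CREATE VIRTUAL TABLE "LOGS"."VT_WEB_LOGS"
  AT "HADOOP_LOGS"."hive"."default"."web_logs";

-- Query it alongside structured HANA data in a single statement.
SELECT c."CUSTOMER_ID", COUNT(*) AS "PAGE_VIEWS"
FROM "LOGS"."VT_WEB_LOGS" l
JOIN "CRM"."CUSTOMERS"    c ON c."WEB_USER_ID" = l."USER_ID"
GROUP BY c."CUSTOMER_ID";
```

The point is not the syntax but the orchestration: the user asks one question, and the platform decides which engine serves which part of it.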

 

When you're at a showcase event laid on by SAP, you are certain to increment your tally of acronyms, and HTAP was the order of the day. Hybrid Transactional/Analytical Processing is the new term coined to describe the convergence of OLTP and OLAP workloads. The ability to run applications and analytics in a single environment, along with the unprecedented improvements in performance and everything else that comes with the SAP HANA Data Platform, is for me why HANA reigns as a database platform. This has profound implications across many fronts: technical, management, economics, latency and user experience, to name a few. Most recognisably for most businesses, this spells an end to batch processing and eradicates much of the "best practice" we are currently beholden to in the domain of Data Warehousing.

 

In summary, a long but interesting day. Thanks to everyone who presented today. I hope that this has been a useful recap for those who attended and for those who didn’t attend, a reason to watch out for my write-up tomorrow. Same time, same place….adios.

I stopped at one of the prominent SAP HANA stands before making my way to what continues to be a fine (and often over-indulgent!) lunch at SAP SAPPHIRE, and had a very interesting discussion with, and demonstration from, the equally helpful guys on the stand. I wanted to share with you the extent of support you can expect from SPS5 around unstructured data and how I envisage this helping businesses.

As discussed in my previous blog, SPS5 brings with it a spectacular array of capabilities which will help businesses solve their data challenges. Having worked in Data Warehousing and Business Intelligence for many years, the perennial problem that continues to spook businesses is the growing volume of unstructured data. A simple search on your preferred search engine will provide a measure of this. Here is the first result I fetched during my search:

  • 80 percent of business is conducted on unstructured information
  • 85 percent of all data stored is held in an unstructured format
  • Unstructured data doubles every three months
  • 7 million web pages are added every day

We often talk about unlocking value from data and delivering information into the hands of business users, but the reality is that this has overwhelmingly focussed on structured data deposits, and in the shadow of this success looms an untapped source of value and competitiveness locked away in their unstructured counterparts. Of course, we have made strides through the use of API interfaces with social media networks, text processing capabilities during Extract, Transform and Load (ETL) operations, and working with partners such as Netbase in the Social Intelligence space, but we have been stretched to match, for unstructured data, the sophisticated analytical capabilities we enjoy over structured data.

So what's changed, I hear you ask? I have already mentioned the embedding of Text Analysis processing in the SAP HANA platform, along with the announcement of Extended Services (XS). You may already be aware that we have Binary Large Object (BLOB) support in the platform. When these capabilities are considered collectively, we have everything needed to solve our perennial challenge except a suitable business interface allowing the most important part of the process to take place: an actionable insight. Well, we have it now!

SPS5 will ship with what has been described as an HTML5 "framework", based upon the new XS engine in SAP HANA, which can quickly be adapted to provide a web-based user interface. A user enters a search term and, as the term is typed, suggestions are continuously offered based upon the data residing in SAP HANA, much like the Bing or Google search experience. The search terms entered by a user, and the suggestions offered while typing, emanate from the SAP HANA contents, and this is where the sophistication of the BLOB support and text processing comes in. The collective power of these components allows the SAP HANA engine to present nouns (names of places, people, job titles, etc.) along with the text contents of a document, be that a Microsoft Word document, PDF, etc. The contents of such documents are included in a full-text index which resides in memory.

This very simple but exceptionally powerful capability therefore allows a business user to search through both structured and unstructured data stored in SAP HANA through an interface akin to any familiar web-based search engine. The results are returned along with corresponding analytical facets, which are customisable and resemble the type of visualisation one would experience in SAP BusinessObjects Explorer. The user can then select the appropriate search result, drill into the detail and, as part of this, fetch out and open any corresponding documents. The associative capabilities built into the framework also allow a user to view similar search results, again akin to the experience one would expect in a conventional search engine.
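For a flavour of the underlying mechanics, here is a minimal sketch of HANA's full-text search. The table is hypothetical, and the exact options available in the shipping SPS5 build may differ.

```sql
-- Build a full-text index over a document column; the extracted contents of
-- Word documents, PDFs, etc. are indexed and held in memory.
CREATE FULLTEXT INDEX "IDX_DOC_CONTENT" ON "DOCS"."DOCUMENTS"("CONTENT")
  FUZZY SEARCH INDEX ON;

-- A search-box style query: fuzzy matching tolerates typos, and the score
-- supports the ranked, suggest-as-you-type experience described above.
SELECT "DOC_ID", "TITLE", SCORE() AS "RELEVANCE"
FROM   "DOCS"."DOCUMENTS"
WHERE  CONTAINS("CONTENT", 'barclona', FUZZY(0.8))  -- the typo still matches
ORDER BY "RELEVANCE" DESC;
```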

Potential use cases range from the need to simply and seamlessly search across both structured and unstructured data to identify information relating to a given term, through to sophisticated brand management requiring detailed analysis of social media expressions, documents, and the patterns and contexts pertaining to the various pieces of text. I am interested to learn about the evolution and maturity of this application, and specifically any potential integration with the rest of the SAP BusinessObjects suite, but it is a great illustration of how SPS5 will come together to solve real business challenges nonetheless.

For the techies amongst us, this is currently designed to interface with an SAP HANA Attribute View but I expect that this will evolve to include Analytic Views, allowing the search criteria and results to correspond to measures. The measures are currently limited to the instance count of a given attribute based on the search definition.

SAP HANA continues to be an inescapable phenomenon at SAP SAPPHIRE, where speakers, booths, advertising and coffee-machine conversations are all reminding us of the central prominence this now has within SAP and, in turn, for our customers. I have spent much of my time at the conference talking to representatives from SAP about the technology and the plans for its evolution. I am excited to share the highlights with you.

Firstly, the next major release will be SP5, scheduled for general availability towards the end of this year. In parallel to the rampant development of the on-premise edition, we now also have the SAP HANA cloud edition. This is available through Amazon Web Services and is a real breakthrough in providing an on-demand, elastic solution which is both instantly and economically available to customers for fully-fledged production usage. The latter is a key point, not to be confused with its predecessor, which was limited to development workloads. It has some limitations, amongst which is a size limit of 62GB, but it is a compelling proposition nonetheless. Expect further announcements around virtualisation of SAP HANA in the coming weeks.

Returning to SP5, we can expect a number of enhancements, and the following sections provide an appreciation of these along with my thoughts.

OData will become a supported protocol, making it easier to consume data in clients such as Microsoft Office. This is a significant development which extends the reach of SAP HANA into the enterprise; in particular, it allows PowerPivot developers to seamlessly consume data through this interface in much the same way as they currently can with other data sources. Given the prolific use of Microsoft Office within businesses, this will help deliver pervasive information experiences and increase the SAP footprint in what may traditionally have been Microsoft-oriented IT landscapes. Similarly, we are also seeing a trend towards BI as a service. This is prevalent in industries where customers want to share information with their customers, usually in B2B scenarios, and the recipient would like this information in the form of a data feed which they can then incorporate within their own Data Warehouse implementations. The publication of this data as a web service would make this an effortless task.

There has been further simplification through merging the existing Business Function Library (BFL) and Predictive Analysis Library (PAL) into a single Application Function Library (AFL). Yet another TLA to remember, unfortunately, but hopefully it allows us to forget the other two. I hope to have access to the SP5 build very soon and will examine whether this has introduced any additional capability.
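For a flavour of how these library functions are invoked, here is a heavily abridged sketch of the wrapper-generator pattern used in the early PAL releases. The signature table and argument order are illustrative and should be checked against the PAL guide for your SPS.

```sql
-- Early-style PAL invocation (illustrative; consult the PAL reference).
-- 1. PDATA is a pre-populated table describing the procedure signature;
--    the generator creates a callable wrapper for K-Means in the AFLPAL area.
CALL SYSTEM.afl_wrapper_generator('PAL_KMEANS', 'AFLPAL', 'KMEANS', PDATA);

-- 2. Call the generated procedure with input data, a parameter table and
--    output tables for the cluster assignments and cluster centres.
CALL PAL_KMEANS("MY_DATA", "MY_PARAMS", "MY_RESULTS", "MY_CENTERS") WITH OVERVIEW;
```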

There was also mention of Text Analysis in this next release, and this was being discussed synonymously with the current functionality available in Data Services 4.0. I am making a slight leap here, but SP5 appears to make this available within the database engine, significantly ramping up the support for unstructured data: from the identification of entities, be they names of individuals, places or products, through to analysing the sentiments being expressed. This would be much welcomed and would open up enormous possibilities for wading through valuable stores of unstructured data, with the processing happening directly inside the database engine and without incurring the data latency that would currently result from using the Text Analysis transform in Data Services. Unstructured data, in both on-premise and off-premise forms, represents a huge opportunity for businesses and is often neglected due to the technical constraints that have blighted businesses to date. This opens up incredible possibilities to respond to social media expressions and manage your brand and loyalty in a way that has never been possible. Increasingly, unstructured data is being monitored and analysed alongside structured and quantitative data to provide a consistent interpretation and appropriately contextualised information.
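If it surfaces through the existing full-text indexing mechanism, in-database Text Analysis might look something like the sketch below. The configuration name and the $TA_ results-table convention are assumptions drawn from how the Data Services feature set describes entity and sentiment extraction.

```sql
-- Hypothetical in-database Text Analysis (names and configuration assumed).
-- Creating the index triggers linguistic processing; extracted entities and
-- sentiments land in a companion table prefixed with $TA_.
CREATE FULLTEXT INDEX "IDX_TA_FEEDBACK" ON "CRM"."FEEDBACK"("TEXT")
  TEXT ANALYSIS ON
  CONFIGURATION 'EXTRACTION_CORE_VOICEOFCUSTOMER';

-- Inspect the extracted sentiment tokens alongside their classification.
SELECT "TA_TOKEN", "TA_TYPE"
FROM   "CRM"."$TA_IDX_TA_FEEDBACK"
WHERE  "TA_TYPE" LIKE '%Sentiment%';
```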

Multiple instances on a single appliance will be supported in the case of non-productive environments. This simplifies the creation of Development and Test environments. SAP HANA One is also a candidate for consideration here. The provisioning of this has been simplified through the use of configuration wizards. This provides much needed clarity and allows customers to effectively and more economically maintain discipline around Development, QA and Production practices. I expect further developments in this area through support of virtualisation.

The backup functionality is currently handled from the Studio client; this will be extended through integration with third-party tools. This is yet another stride towards ensuring that SAP HANA is truly enterprise-ready, as it removes hurdles to complying with backup and recovery procedures and allows HANA to fit seamlessly within the wider IT strategy. On a similar note, the integration with SAP Solution Manager has been enhanced by removing the need to connect the SAP HANA system to the internet to download and apply updates; you can now maintain the SAP HANA system through SAP Solution Manager. On the subject of maintenance and recoverability, SP5 introduces the concept of warm standby, which enhances the current disk-level replication capability. The new replication functionality will take place at the SAP HANA data store level, resulting in data being committed to memory and hence reducing the recovery time in the event of failure.

Event Stream Processing (ESP), which allows the monitoring and capturing of complex events, has been more closely integrated with SAP HANA, which is now a supported destination. ESP is typically used where we encounter a data source that is highly transactional and frequent in nature; examples include sensors, machine outputs, etc. The application of SAP HANA will result in a dramatic reduction in latency, be that latency in data, analysis or action. Users will have instantaneous access to granular data collated through the ESP network and the ability to analyse this through familiar and consistent formats via the SAP BusinessObjects suite.

Other improvements include usability enhancements to SLT and a rearrangement of the SAP HANA Studio. The client tool has been redesigned to simplify the creation of Attribute and Analytic Views by merging this process, and it also introduces features such as code debugging and IntelliSense-style completion to help developers build scripts. This provides the mature development environment users have come to expect and removes the multiple steps currently required for debugging and setting traces… phew!

Lastly, and possibly most significantly, the next release boasts a new capability known as Extended Services (XS). This provides web-serving capabilities built into SAP HANA, which again simplifies application architectures. The current Information Composer will be amongst the first SAP applications to leverage this, in what will be an HTML5 application supporting text search and an improved set of visualisations. SAP HANA continues to evolve into a comprehensive platform which goes well beyond "just being a faster database". Increasingly, SAP HANA is delegating processing that would traditionally have taken place in the application tier down to the database tier, and this is another example of that. I intend to provide additional detail on this in the coming weeks.

At the risk of squeezing one more in, geospatial support will also be introduced in the next release. I haven't been able to ascertain the extent of this, but expect it to provide geocoding support and query functions such as points, lines, polygons and buffers. Again, more on this as it becomes known.
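If it follows common SQL/MM conventions, usage might look something like this minimal sketch. This is entirely speculative at this stage, and the tables are hypothetical.

```sql
-- Speculative geospatial sketch (SQL/MM-style ST_ functions, tables hypothetical).
-- Find stores within a 5km buffer of a point of interest, assuming a
-- projected spatial reference system where units are metres.
SELECT s."STORE_ID", s."NAME"
FROM   "RETAIL"."STORES" s
WHERE  NEW ST_Point(s."LON", s."LAT")
         .ST_Within(NEW ST_Point(2.17, 41.38).ST_Buffer(5000)) = 1;
```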

Signing off to start learning about Mobile Business Objects in Sybase Unwired Platform…
