07 May 2017 @ 10:03 AM 

Upgrading library components across an entire suite of applications is not without its risks, as one may quickly learn when moving from Npgsql 2 to Npgsql 3. The same applies between any two versions: who knows what bugs might have been added, or what bugs might have been fixed that you relied upon, either intentionally or without even being aware that the behaviour on which you relied was in fact unintended.

As it happens, Npgsql 3 has exactly such quirks when it comes to upgrading from Npgsql 2. On some sites, and on some workstations, we received reports of exceptions and errors after the upgrade was deployed. These took the form of IOExceptions thrown from within the Npgsql library when we ran a query, which failed because "a connection was forcibly closed by the remote host". Even adding retry loops didn't resolve the issue; the code would hit its retry limit, as if at some point the NpgsqlConnection simply refused to work at all.

After some investigation, it turned out to be a change in how Npgsql 3 manages connection pooling. In our case, a connection sitting in the pool was being closed by the PostgreSQL server. Previous versions masked this: with Npgsql 2, pooled connections would be re-opened if they were found to be closed. Npgsql 3 changed this, both for performance reasons and for consistency with other data providers. As a result, our code was creating a connection and opening it, but actually receiving a remotely closed connection from the pool, and any attempt to query against that connection would throw an exception.

Furthermore, because of the nature of the problem there was no clear workaround. We could trap the exception, but at that point how do we actually handle it? If we tried to re-open the connection, we'd just get the same closed connection back from the pool. It would be possible to disable pooling and re-open in that case, but there isn't much reason for that to happen only when this specific exception occurs; it would also mean adding handling everywhere we perform an SQL query, and the exception might not be caused by the pooling behaviour in the first place.

The fix we used was to add new global configuration options to our software which append parameters to the connection string to disable pooling and/or enable Npgsql's keepalive feature. The former sidesteps the issue by not using pooled connections at all; the latter prevents the connections from being closed remotely (except when there is a serious problem, of course, in which case we want the exception to occur). So far this has been successful in resolving the problem on affected systems.
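A minimal sketch of what those global options translate to on the connection string. `Pooling` and `Keepalive` are real Npgsql connection-string parameters; the helper and option names here are hypothetical, just to illustrate the approach:

```csharp
using System;

static class ConnectionStringTweaks
{
    // Appends the workaround parameters to an existing Npgsql connection
    // string. disablePooling and keepaliveSeconds stand in for the global
    // configuration options described above.
    public static string Apply(string baseConnectionString, bool disablePooling, int? keepaliveSeconds)
    {
        string result = baseConnectionString.TrimEnd(';');
        if (disablePooling)
            result += ";Pooling=false";        // bypass the connection pool entirely
        if (keepaliveSeconds.HasValue)
            result += ";Keepalive=" + keepaliveSeconds.Value; // periodic keepalives, in seconds
        return result;
    }
}
```

For example, `ConnectionStringTweaks.Apply("Host=db;Database=app", true, 30)` yields `Host=db;Database=app;Pooling=false;Keepalive=30`.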

Posted By: BC_Programming
Last Edit: 07 May 2017 @ 10:03 AM

Categories: .NET, C#, Database, Programming
 05 May 2014 @ 12:35 AM 

Quick, short post. I have a lot of draft posts that I don't know if I'll ever finish, so I'm trying a policy of simply aiming to write shorter posts; if they end up getting longer, so be it.

Jasper Reports recently updated to 5.5.2.

It brought with it quite a few changes, additions, and revisions- as well as Deprecation.

Those familiar with Jasper Reports may be familiar with the method by which parameters are passed for its various capabilities. The existing parameter methods have now been deprecated; in their place is a rather confusing object-based hierarchy. It seems like it could work just as well.

There is another side to this, however. Our current Jasper Reports adapter works fine as it is, so this "improvement" only creates more work for us. It got me thinking about the trouble of depending on external APIs and libraries: even if they are open source, you are still somewhat tied to whatever the library's publishers decide. We cannot easily distribute modified versions of the library, after all.

On a related note, JasperSoft, the company behind Jasper Reports, has been bought by Tibco. This isn't necessarily a bad thing, but it could go either way, and it is certainly a change that may cause issues in the future, much akin to the wariness around Oracle now owning MySQL.

Posted By: BC_Programming
Last Edit: 05 May 2014 @ 12:35 AM

Categories: Database
 22 Dec 2013 @ 4:13 AM 

Following my experience fiddling with this, as documented here, I found the result had numerous issues. Even when run on a separate thread there were still UI problems, since it starved the UI thread even at low priority, because it never actually yielded. It also provided no progress reporting, which just sucked in general.

I revisited the problem and discovered Postgres's "Large Object" support. The documentation and examples were a bit basic, but I managed to piece together an example that works.

When you create and store large objects in the Postgres database, you are given an "Object ID", or OID. From the C# perspective this is basically just an int; if you want to reference large objects from your database tables, you simply add an OID field to the table and store the int there. This avoids a number of problems we had with the older implementation, particularly since we no longer need sequential access at all; since all the query retrieves is an int, we don't have a massive result set to send over the wire.

The actual 'oomph' of uploading or downloading the data to a large object can be done in a chunked fashion. First, here is the sample code that adds a given file's contents as a large object, and returns the resulting OID:
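The original code block is not preserved in this copy, so what follows is a sketch in the same spirit, assuming Npgsql 2's `LargeObjectManager`/`LargeObject` API from the `NpgsqlTypes` namespace (mirroring the PostgreSQL JDBC large object API); the 8 KB chunk size is an arbitrary choice, not from the post:

```csharp
using System.IO;
using Npgsql;
using NpgsqlTypes;

public static class LargeObjectUpload
{
    // Uploads the contents of the given file as a Postgres large object,
    // returning the resulting OID. Expects an already-open NpgsqlConnection.
    public static int UploadLargeObject(NpgsqlConnection conn, string filename)
    {
        // Large object operations must happen inside a transaction.
        using (var trans = conn.BeginTransaction())
        {
            var manager = new LargeObjectManager(conn);
            int oid = manager.Create(LargeObjectManager.READWRITE);
            LargeObject lo = manager.Open(oid, LargeObjectManager.WRITE);
            using (var fs = File.OpenRead(filename))
            {
                var buffer = new byte[8192];
                int read;
                while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // A progress event could be fired from here for the UI.
                    lo.Write(buffer, 0, read);
                }
            }
            lo.Close();
            trans.Commit(); // finalize the changes streamed over the wire
            return oid;
        }
    }
}
```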

It's reasonably simple. There are some caveats if something like this is done in a "real" program, such as deleting any existing OID for that record, making sure you only have one LargeObjectManager per connection, and so on. Another caveat is that it all needs to be handled within a transaction. I've been testing against a local Postgres server, and I'm a bit anxious because I'm not sure whether the Commit is the single point where all the work actually happens, which would effectively mean I did all this for nothing. It doesn't work outside of a transaction, so I hope, and suspect, that the Commit simply finalizes the changes that were already sent over the wire. I will be a sad panda if it turns out that it waits for the Commit to send anything at all, but that would also be a very silly way to implement it.

Of course, downloading back from an Object ID is similar. One caveat I found with Npgsql was that OID fields were read back as a boxed long rather than an int, which meant my initial attempts to cast directly to int failed. I had to unbox to a long and then cast to an int, via (int)(long)dr["ObjectField"]. But once the ID itself was retrieved, it was a relatively simple process to reverse the upload:
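Again the original listing is missing here, so this is a reconstruction under the same assumptions as the upload sketch (Npgsql 2's `LargeObjectManager` API; chunk size arbitrary):

```csharp
using System.IO;
using Npgsql;
using NpgsqlTypes;

public static class LargeObjectDownload
{
    // Downloads the large object identified by the given OID into a file.
    // The OID would typically come from a query, unboxed as described
    // above: int oid = (int)(long)dr["ObjectField"];
    public static void DownloadLargeObject(NpgsqlConnection conn, int oid, string filename)
    {
        // As with the upload, this must run inside a transaction.
        using (var trans = conn.BeginTransaction())
        {
            var manager = new LargeObjectManager(conn);
            LargeObject lo = manager.Open(oid, LargeObjectManager.READ);
            using (var fs = File.Create(filename))
            {
                var buffer = new byte[8192];
                int read;
                while ((read = lo.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // Progress events for the UI would be fired from here.
                    fs.Write(buffer, 0, read);
                }
            }
            lo.Close();
            trans.Commit();
        }
    }
}
```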

Since it's pretty much the inverse of the upload method, I didn't comment it. One notable thing about both of these is that in a concurrent situation you would likely want to put any progress updates inside the while loop. In my implementation I updated the appropriate UI by firing events from the class these methods were placed in: progress events as it uploaded or downloaded, and a completion event when it finished.

My fundamental misunderstanding was not realizing that large objects are effectively database globals; the OID field isn't arbitrary byte data, it is simply an int referring to the data within that database. The test was successful: I was able to upload and then download the data. Now it's just a matter of integrating it into the project I needed it for, which is another matter entirely.

Posted By: BC_Programming
Last Edit: 22 Dec 2013 @ 04:13 AM

Categories: C#, Database, Programming
