About a month ago, my ISP replaced my internet box with a newer model because of instabilities with the old one. I was very pleased to see that I could put it anywhere at home and get connected, whereas the old one only worked when placed in a window in my attic. I could also bring it to church and get a connection there, so I can now imagine some internet activities for the little children there. Better still, the speed was much higher.
But there was a drawback. While the speed seemed better, my telnet sessions got disconnected after five minutes of idle time. That issue did not occur with my old box: I could leave a telnet session open and idle for hours without any drop. It was a pain to have to reconnect and re-enter my passwords each time, and especially when an interactive task stopped because of the idle timeout.
These telnet sessions are connections to an AS400 machine, for which I use the xtn5250 software. xtn5250 is an open-source, "nearly full options telnet 5250 terminal emulator". I searched the net for a possible solution to these session drops, and those searches led me to fixing it myself.
It's worth noting that in the past I contributed an enhancement suggestion to this software: I had to work with an AS400 whose port 23 was mapped to another port for the outside world because of IPv4 limitations, and xtn5250 (like the proprietary Client Access on Windows) could only connect to port 23. So, I tried a solution: adding setKeepAlive(true) to the socket. It worked; no more session drops.
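As a minimal sketch of the fix (the class name and commented-out host are illustrative, not taken from the xtn5250 source), enabling keep-alive is a one-liner on the socket before or after connecting:

```java
import java.io.IOException;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket();
        // Ask TCP to send periodic keep-alive probes on an idle
        // connection, so stateful equipment along the path (like an
        // ISP box) keeps the session open instead of timing it out.
        socket.setKeepAlive(true);
        System.out.println("keep-alive enabled: " + socket.getKeepAlive());
        // socket.connect(new InetSocketAddress("as400.example.com", 23));
        socket.close();
    }
}
```

The option can be set on an unconnected socket; the probes themselves are handled by the OS TCP stack, so no application-level traffic is generated.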
I am not a Java programmer, but with some research I was able to fix the issue, and I forked the original project. That's the beauty of open source.
I have been coding in RPG/400 for more than 12 years. There are a number of opcodes I have never used; I didn't even know exactly what they mean, or for what purpose I might need them.
One of these is FEOD: Force End Of Data.
The FEOD operation signals the logical end of data for a primary, secondary, or full procedural file. The FEOD function differs, depending on the file type and device. (For an explanation of how FEOD differs per file type and device, see the Database Guide, SC41-9659.)
FEOD differs from the CLOSE operation: the program is not disconnected from the device or file; the file can be used again for subsequent file operations without an explicit OPEN operation being specified to the file.
You can specify conditioning indicators. Factor 2 names the file to which FEOD is specified. You can specify a resulting indicator in positions 56 and 57 to be set on if the operation is not completed successfully.
To process any further sequential operations to the file after the FEOD operation (for example, READ or READP), you must reposition the file.
According to its definition, it is like a CLOSE that does not disconnect the program. But this explanation does not say why it may be needed. To understand its use, we must know the mechanism that takes place when you write or read a record in an RPG program.
Physically, the program has three resources for storing data: the hard disk (the non-volatile storage device), the RAM buffer, and a DB layer. When you issue the WRITE operation, the record is not necessarily written to disk; it is written to the RAM buffer. Writing to the hard disk happens only when the program ends, when the RAM buffer is full, or when you explicitly ask for it with the FEOD opcode. When you access a record with embedded SQL, the program only looks at the DB layer, which may be linked to the hard disk, not to the RAM buffer. This means that if you write 5 records to your file and then use embedded SQL to access the newly written records, you may not see them.
So, if you mix the native WRITE opcode and an embedded SQL SELECT in your program, to ensure correct synchronization you must issue an FEOD after each WRITE. This asks the program to write the data to disk without waiting for the buffer to be full. It costs some performance, but it can spare you strange results in your application.
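As a sketch of the pattern (the file name MYFILE, record format MYFILER and host variable count are hypothetical, and the fixed-format column positions are approximate):

```
     FMYFILE    O    E             DISK
     D count           S              9P 0
      * Native WRITE may leave the record in the blocking buffer
     C                   WRITE     MYFILER
      * FEOD flushes the buffered records so the DB layer sees them
     C                   FEOD      MYFILE
      * The embedded SQL SELECT now sees the freshly written records
     C/EXEC SQL
     C+  SELECT COUNT(*) INTO :count FROM MYFILE
     C/END-EXEC
```

Without the FEOD between the WRITE and the SELECT, the count could miss records still sitting in the buffer.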
select * from bad_customer a, customers b where a.CUSTID < 1000 and a.REGION=b.REGION and a.DEPTNO = b.DEPTNO
Table a has an index on the CUSTID field; table b has an index on the fields REGION, DEPTNO and ZIP.
Logically, no matter how big the table customers is, this statement should run very quickly, as the fields REGION and DEPTNO are the first and second keys of an index on the table customers. However, it took an eternity.
I ran the statement in STRSQL to see why it was taking so long, and I saw that an access path was being created for the table customers. It then took me a day to figure out the cause. On the table bad_customer, REGION was declared decimal(5, 0), although its value is always less than 10000. On the table customers, however, the field REGION was defined as decimal(4, 0). Because of this difference in field definitions, the logical index could not be used to access the table customers.
Replacing a.REGION with CAST(a.REGION as decimal(4, 0)) solved the issue.
select * from bad_customer a, customers b
where a.CUSTID < 1000
and cast(a.REGION as decimal(4, 0)) = b.REGION
and a.DEPTNO = b.DEPTNO
When I checked the file XXXXXXXX, it existed and was in one of the libraries of my library list. And it was not in QTEMP. So, why was the compiler issuing an RNF2120 on this line?

fXXXXXXXX if e k disk

After many tries and retries, I ran a DSPFFD on the file to see if there was anything abnormal with the file creation. And bingo! DSPFFD failed. It was because I had created the file in an SQLRPG program which was NOT compiled with COMMIT = *NONE, so the file existed but in a zombie state, waiting for a COMMIT or a ROLLBACK. So, I signed off, modified the SQLRPG program which creates the file by adding

C/EXEC SQL SET OPTION COMMIT = *NONE C/END-EXEC

then recompiled and re-created the file, and the problem was solved.
We needed to clean up the iSeries by removing all the unnecessary files. It always seems to be a good game to go hunting for large files. For me, going the IFS way was a discovery. The simple command below did the whole job.
qsh cmd('find / -size +200000 -ls >/home/bigfiles.txt;')
Those familiar with Linux will recognize the find command. A subtle point is the number +200000: find counts in 512-byte blocks, so you multiply this number by 512 to get the size in bytes. Here, I'm looking for files bigger than 200,000 * 512 bytes, about 100 MB. The command lists every file on the IFS bigger than 100 MB, but also every member of any physical file bigger than 100 MB.
It takes an eternity to complete. The PGM-FIND job ate up to 20% CPU, though most of the time around 7%, and in my case this was not a real issue.
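Since find counts its -size argument in 512-byte blocks, the threshold for any target size is just the byte count divided by 512. A small sketch of the arithmetic (the 100 MB target is only an example):

```shell
# Target size: 100 MB expressed in bytes
target_bytes=$((100 * 1024 * 1024))
# find's -size unit is 512-byte blocks, so divide to get the threshold
blocks=$((target_bytes / 512))
echo "$blocks"    # 204800
# The resulting QShell command line would then be:
echo "find / -size +$blocks -ls >/home/bigfiles.txt"
```

Conversely, the +200000 used above corresponds to 200,000 * 512 = 102,400,000 bytes, a little under 100 MB.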