FB II Compiler
Get the size of a window
I use SYSTEM(_scrnWidth) and SYSTEM(_scrnHeight) to find the width and height of the screen.
I then use those values to create a window.
I then use WINDOW(_width) and WINDOW(_height) to determine the width and height of the window.
I would have thought the window would have been exactly the same size as the screen. However, the window's height and width are both 1 pixel bigger than the screen's.
Why is this so?
Just trying to clean up around the edges of my project 8)
Example code follows:
' (assumes a window has already been opened at the full screen size)
PRINT "SYSTEM(_scrnWidth), SYSTEM(_scrnHeight) says screen is ";
PRINT SYSTEM(_scrnWidth); " x "; SYSTEM(_scrnHeight)
PRINT "WINDOW(_width), WINDOW(_height) says window is ";
PRINT WINDOW(_width); " x "; WINDOW(_height)
DO
UNTIL FN BUTTON ' click mouse to exit
Remember, 0 is a number too: when you say DIM a(10), you get records 0 to 10, which is really 11 records. That's probably the logic, but when I think about it again it makes my head hurt, so I'll leave it at that for now.
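A quick sketch of Terence's count in Python (Python is used here purely for illustration; in FB, DIM a(10) allocates indices 0 through 10):

```python
# DIM a(10) in FB gives you elements a(0) .. a(10).
# Modelling those indices in Python just to count them:
indices = list(range(0, 10 + 1))  # 0 through 10, inclusive
print(len(indices))               # 11 records, not 10
```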
Or it could be building a window from -1,-1 to screenWidth,screenHeight. Keep in mind that the WINDOW command is a Staz CODE 5 resource and not a direct toolbox routine; when you put in 0,0, it auto-centers. Not saying that Apple is never wrong, or Staz is, or anything like that.
Looking back on it again, maybe the first answer is right. Maybe if you write your question down in the margin, some scientist will come up with a proof in 350 years, I guess.
I just realised the error of my ways. As Terence points out 0 is a number too, so I was really asking for a window one pixel bigger than the screen height or width.
By this I mean: say the system returns 832 x 624 and I open a window from 0,0 to 832,624. Since coordinates start at 0,0, that window is actually one pixel bigger in each direction, so I have to subtract one pixel horizontally and vertically to get a window that is truly 832 x 624.
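A minimal sketch of that arithmetic in Python (for illustration only; 832 x 624 are the example values from the post):

```python
scrn_width, scrn_height = 832, 624  # example values from SYSTEM()

# A window whose pixels run from coordinate 0 to coordinate (size - 1)
# covers exactly `size` pixels, so the last usable coordinate is size - 1.
last_x = scrn_width - 1    # 831
last_y = scrn_height - 1   # 623

# Going back the other way: a span from 0 to 831 is 831 - 0 + 1 = 832 pixels.
print(last_x, last_y, last_x - 0 + 1)
```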
Ah, the old 0,0 problem.
Sorry for wasting everyone's time.
Being dyslexic, I have real problems trying to count and always get messed up on when to start at 1 and when to start at 0. Sometimes you have to add 1 or subtract 1 and sometimes you don't. This spectre raises its ugly head on a plethora of occasions: when indexing memory locations, when indexing file locations, when developing screen scroll routines, and basically in any situation where you have to count.
Does anyone have a really good understanding of this, and can succinctly summarize the rules? All that I know is it depends on whether you start counting at zero or at 1, but even this is sometimes hard to recognize.
Ah, the old 0,0 problem.
The difference comes about when you want to enumerate (count) objects or determine a size or distance. You _usually_ start counting things with 1 not zero (you may say that you have none of something but when you are actually counting an amount of things you start with one).
For example, if the size of something is 640 units and you start counting the first one as 0, then the coordinates go from 0 to 639, so you're off by one when shifting from a _size_ to an _enumeration_. If you were assigning coordinates starting at n, you add 640-1 to n to get the coordinate of the last one. Similarly, suppose you have some $100 bills that are consecutively numbered and you want to know how many you have without counting them. You subtract the smallest (or 1st) serial number from the largest (or last), giving you an interval (or size), and then add 1 to convert the size to a count (or enumeration).
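The $100-bill example, sketched in Python (the serial numbers here are made up):

```python
first_serial = 10473   # hypothetical smallest serial number
last_serial = 10519    # hypothetical largest serial number

interval = last_serial - first_serial  # a size/distance: 46
count = interval + 1                   # an enumeration: 47 bills
print(interval, count)
```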
Indexes, or distances, into an array with a 0 element match the element number, but in an array where the first element is labelled 1 the distance is 1 less than the index. It gets more complicated when each element is a structure having a size other than 1. In any case, you add the size of the element to the index of the nth element to get the index of the n+1st.
Rather than trying to remember whether you add or subtract 1, think of what it is you are actually going to use the numbers for, and then generalize from a small number: an array of 1 element indexes from 0, so an array of n elements indexes from 0 to n-1.
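That rule of thumb can be sketched in Python (for illustration only; the 8-byte element size and the helper name are arbitrary examples):

```python
ELEMENT_SIZE = 8  # bytes per record -- an arbitrary example size

def offset_of(n, base=0):
    """Byte offset of the nth element (counting from 0) in a packed array."""
    return base + n * ELEMENT_SIZE

# Generalize from a small case: an array of 1 element indexes from 0,
# so an array of n elements indexes 0 .. n-1.
n = 3
print([offset_of(i) for i in range(n)])  # offsets 0, 8, 16

# And the index of the n+1st element is the nth's index plus the element size:
print(offset_of(2) - offset_of(1))       # 8
```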
I don't know if I'm dyslexic or not, but at times I get all tangled up in blue at indexing time too. For me what works is to whittle down the problem to a size my meatware is comfortable with. The KISS deviant to Fudd's First Law: If you make a problem small enough, it WILL fall over.
Instead of thinking about &0000 to &FFFF or 1 to 32768 I think about 0 to 2 or 1 to 3. Then I can walk through the loop in my head and be sure everything behaves the way I want. If I'm still too foggy to handle even that (coffee not strong enough? more jumping jacks? too early to crank up the rock'n'roll?), I can scratch a picture or a little 3 element table. If I can find a pencil. Any crutch in a storm. Except flowcharts.
This way I can try starting at 0 and see if what happens is what I want. Then when I finally have it sorted out, 0 or 1, I scale it back up to the actual numbers. Burn all the evidence and look smart.
Finally, all programming software should come with an oxygen mask and a certificate for a free starter bottle.