Wednesday, August 27, 2008

[ADMIN] excluding tables

Is there a way to get pg_dumpall to exclude a table or
do I have to dump the tables individually with pg_dump? I have
one very large table that never changes and it'd be nice
not to have a copy of it in every backup (especially since
the backups get sent to another machine over the Internet).
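pg_dumpall itself has no exclusion switch, but pg_dump (8.2 and later) accepts -T/--exclude-table, so one workaround is to dump globals with pg_dumpall -g and each database with pg_dump, skipping the big table. A sketch with placeholder database/table names; the commands are echoed rather than executed so it is safe to try anywhere:

```shell
# Placeholder names; adjust to the real database and table.
DB="mydb"
BIGTABLE="public.big_static_table"

GLOBALS_CMD="pg_dumpall -g"           # roles and tablespaces only
DB_CMD="pg_dump -T $BIGTABLE $DB"     # everything except the big table
ONCE_CMD="pg_dump -t $BIGTABLE $DB"   # dump the static table one time

echo "$GLOBALS_CMD > globals.sql"
echo "$DB_CMD > ${DB}.sql"
echo "$ONCE_CMD > big_static_table.sql"
```

The static table then only needs to be dumped once and kept with the backups, instead of being shipped over the Internet every night.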


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

[pgsql-jobs] Please post this position to your group

Direct hire position in Portland, OR
 
Salary is up to $100K
 

Position Summary
The Senior Database Administrator (DBA) leads and provides the direction for database technology and administration for all database environments.  The DBA is the primary support for all development, test and production database environments to ensure high availability, reliability, appropriate capacity, scalability, backup, recoverability and performance. In addition, the Senior DBA is expected to participate in the 24x7 on-call support rotation.


Responsibilities
- Collaborate with others to determine and deploy database solutions to business prioritized requirements, including business continuance and information security along with functional requirements
- Install, configure, and test new PostgreSQL databases, perform version upgrades of existing databases
- Install, upgrade and maintain all PostgreSQL-related server and administration software
- Perform DB profiling and optimization; analyze and propose schema changes to PostgreSQL databases as needed
- Proactively monitor activity, utilization, exceptions and database health; propose and deploy appropriate improvements; automate repetitive tasks
- Collaborate with team members to create, publish and maintain database deployment methods and procedures for PostgreSQL databases
- Collaborate with the Infrastructure Architect to create and maintain database technology roadmaps and deployment plans
- Perform database backup and recovery duties; establish standards and schedules for database backups; develop and routinely test recovery procedures for each database; ensure that backup schedules meet the recovery requirements and conform to all database and data quality policies and standards
- Work closely with project teams to ensure project success; coordinate product releases, data structure evolution, and manage and synchronize data promotions between development, test and production environments
- Act as the final point of escalation and resolution for data related issues
- Design, implement and maintain archival and high availability strategies
- Provide advanced technical support for database developers
- Provide 24x7 database support as part of a rotation of DBAs


Needed Skills and Abilities
- Exceptional knowledge of the PostgreSQL database platform
- Mastery of physical database design
- Expertise in configuring, implementing and supporting high availability strategies with database systems, including clustering and replication
- Mastery of ANSI SQL and/or PL/SQL
- Experience in software development using Perl, Shell, or other scripting languages
- Solution focused and methodical, careful attention to detail
- Solid background using UNIX operating systems – Ubuntu Linux administration a plus
- Experience with SQL Server and/or Oracle database platforms is desirable
- Ability to perform capacity planning which supports the forecasting and purchasing processes
- Ability to implement Database security safeguards and audit them
- Consistently learning about trends and features in DB solutions, able to bring best practices to bear at solving DB and overall application problems
- Ability to work in a high availability production environment where unplanned downtime is unacceptable
- Ability to communicate ideas and designs to all stakeholders, including non-technical audiences


Needed Education and Experience
- Minimum bachelor's degree in Computer Science or related field; equivalent combinations of education and experience will be considered in lieu of a degree
- Minimum five years of progressive experience performing PostgreSQL administration in a production environment; experience should include three or more years of database administration in an enterprise-level environment with very large (> 1 terabyte) databases

 

 

Susan Schmidt

Sr. Recruiter

503-598-3182 Office

503-720-8677 Cell

503-968-3198 Fax

susans@infogroupnw.com

INFOGROUP NORTHWEST
2 Centerpointe Drive, Suite 105

Lake Oswego, OR 97035

www.InfoGroupNW.com

Re: [ADMIN] WAL files during a backup

Steve, thanks for your response and your question.

Ok, here is some clarification on the WAL file name used in the example below.

The WAL file name I used in the example is actually 000000010000000000000088, and the backup file name is 000000010000000000000088.0089ED8C.backup.

The WAL file 000000010000000000000088 exists in the data/pg_xlog directory after the backup run is completed. The problem for me is that it wasn't copied to the archive directory.
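The naming relationship can be read straight off the file names: the backup history file is named after its starting WAL segment plus the stop location, so the segment that must reach the archive is everything before the first dot. A small shell sketch, using the file names from the example above:

```shell
# Backup history file name from the example above.
backup_file="000000010000000000000088.0089ED8C.backup"

# The WAL segment the history file refers to is the part before
# the first dot; this segment must be archived for the base backup
# to be restorable.
wal_segment="${backup_file%%.*}"
echo "$wal_segment"
```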


Since this is a test machine with little database activity, six additional WAL files have been added to the pg_xlog directory in the six hours since the last backup was performed.

Drew


----- Original Message ----
From: "steve@outtalimits.com.au" <steve@outtalimits.com.au>
To: windsurferdrew-pg@yahoo.com
Cc: pgsql-admin@postgresql.org
Sent: Wednesday, August 27, 2008 4:36:24 PM
Subject: Re: [ADMIN] WAL files during a backup

00000001000000000000223387 would usually be the next WAL to be written.
How often are your WALs written out?

On Wed, 27 Aug 2008 07:07:17 -0700 (PDT), windsurferdrew-pg@yahoo.com
wrote:
> I have a question regarding the WAL files that are moved during a backup
to
> the "archive directory".
>
> I have setup the postgresql.conf file to have the following archive
> command:
>
> archive_command = 'cp -i %p /var/lib/pgsql/backups/%f </dev/null'
>
> Environment:
> PG Version 8.1.4
> OS: Linux 2.6.18-8.el5
>
> After I have run the pg_hotbackup script, the backups directory contains
> only 2 files:
> 1. The gzipped tar file pg_hotbackup_<timestamp>.tar.gz, and
> 2. a WAL file ".backup" file, (for example
> 00000001000000000000223387.0089ED8C.backup)
>
> The actual WAL file did not get copied to the backups directory. From the
> example above, I would have expected at least 1 WAL file named
> 00000001000000000000223387 to be in the backups directory.
>
> My questions are:
> 1. Is this normal behavior? (perhaps no activity in the DB to cause a WAL
> file write during the backup?)
> 2. If I try to restore the DB without this WAL file, will the restore
> fail?
>
> Thanks in advance
>
> Drew
>
>
>


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

[pgsql-fr-generale] timestamp without time zone

Hello,

I plan to store dates in my tables as timestamp
without time zone.
It seems like a good choice; I need year, month, day,
hour, minute, second.

After reading http://docs.postgresqlfr.org/8.3/functions-datetime.html

I tried:
$query_insert = "INSERT INTO \"TTest\" ( \"Champ1\", \"Champ2\",
\"CTimestamp_wtz\") VALUES ( '$valeur1', '$valeur2', LOCALTIMESTAMP";
Unfortunately I get a syntax error at the end of the line:
LOCALTIMESTAMP^

What I want to do is record the time at which a query
is made.
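For what it's worth, the syntax error is most likely the missing closing parenthesis for the VALUES list in the quoted query string. The statement the server should receive would look like this (the literal values stand in for the script's variables):

```sql
INSERT INTO "TTest" ("Champ1", "Champ2", "CTimestamp_wtz")
VALUES ('valeur1', 'valeur2', LOCALTIMESTAMP);
```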


Regards
--
Pascal

--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale

[INTERFACES] FW: Libpq and the SQL RETURNING clause

 


From: Brefort, Georges R&D/FR
Sent: Tuesday, August 26, 2008 2:41 PM
To: pgsql-interfaces@postgresql.org
Subject: Libpq and the SQL RETURNING clause

Dear Postgres specialist,
 
1. Assume you have a table such as the following:
 
CREATE TABLE example( id SERIAL, someData VARCHAR(32) );
 
2. Now, you use an SQL statement to insert the data and get the id value back:
 
INSERT INTO example( someData ) VALUES( 'BlaBlaBla' ) RETURNING id; 
 
I wonder if it is possible to retrieve the value of the id field in an application using the C libpq library.
 
I pretty much understand how to bind data with libpq, and how to retrieve result sets from a SELECT SQL statement, but not how to retrieve data from that RETURNING clause. (I have many years of practice with the RETURNING clause in the Oracle C library, but that is not the point here.)
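In libpq, an INSERT ... RETURNING behaves like a query that returns rows: the result status is PGRES_TUPLES_OK (as for SELECT, not PGRES_COMMAND_OK), and the value comes back through PQgetvalue. A minimal sketch, assuming the table above; the connection string is a placeholder and a running server with libpq installed is required:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");  /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    PGresult *res = PQexec(conn,
        "INSERT INTO example(someData) VALUES('BlaBlaBla') RETURNING id");

    /* RETURNING produces a one-row result set, so check for
     * PGRES_TUPLES_OK and read it exactly as a SELECT result. */
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("new id = %s\n", PQgetvalue(res, 0, 0));
    else
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```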
 
Thank you very much for your help and your patience.
With my best regards.
 
Georges BREFORT

Re: [GENERAL] Dumping/Restoring with constraints?

In response to "Phoenix Kiula" <phoenix.kiula@gmail.com>:

> > The recommended solution is to use a regular schema-and-data dump.
> > If you really have to separate schema and data, the --disable-triggers
> > switch might help, though you open yourself to the possibility of
> > loading inconsistent data that way.
>
> Thanks Tom.
>
> This is the dump command being used on a 8.2.3 database on Linux:
>
> $ pg_dumpall > mydb.sql
> $ umask 077
> $ gzip mydb.sql
>
> Then I download the mydb.sql.gz file into my local computer (Mac OSX
> with 8.3.3) and unzip it to mydb.sql.
>
> The local database is already created, with all tables and constraints and all.

Don't do that. Do one of the following:
*) Allow the dump file to create all tables.
*) Load the schema first, then dump the data only with pg_dumpall --disable-triggers -a
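The second option could look like the sketch below (file names are placeholders; the commands are echoed rather than run so the sketch works anywhere):

```shell
# Two-phase dump: schema and data separated, with triggers (and hence
# foreign-key checks) disabled during the data load.
SCHEMA_CMD="pg_dumpall -s"
DATA_CMD="pg_dumpall -a --disable-triggers"

echo "$SCHEMA_CMD > schema.sql"
echo "$DATA_CMD > data.sql"
```

As Tom notes earlier in the thread, disabling triggers opens the door to loading inconsistent data, so the single schema-and-data dump remains the safer default.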

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

wmoran@collaborativefusion.com
Phone: 412-422-3463x4023

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [ADMIN] [GENERAL] PITR - base backup question

On Wed, Aug 27, 2008 at 9:52 AM, Michael Nolan <htfoot@gmail.com> wrote:
>> This is a nice touch. With a little bash-fu you could do a find |
>> xargs rm and list/kill the files in one pass. In the standby setups
>> I've done I usually script the whole process, a prep on the main and a
>> startup on the standby.
>
> The scripts to create the system level backups and copy them to the backup
> servers and to rsync the WAL files are both cron jobs.
>
> I've considered writing a script to do all the restore tasks on the backup
> server, but I figure if someone other than me ever has to do it, for example
> if there is a problem when I'm unavailable, that person will probably want
> to watch the entire process carefully, since he or she will be far less
> familiar with the configuration.

Here are some basic facts of life about PITR/log shipping. It is a
disaster prevention feature. Here's the scenario:

You are going to depend on it at 3 a.m. early one Saturday morning three
years in the future, only you are not going to be there. A drive just
went out on the main server, but instead of degrading, the entire backplane
went amber. You are going to be in Barbados on your honeymoon, with
no connectivity to the outside world (your blushing bride made you
leave the laptop at home). The guy doing the switchover to the
standby is the only one who could be reached, and he still
hasn't gotten over the 12-hour bender from Friday. He's never really
understood why your company took your advice and went with PostgreSQL
instead of SQL Server, is cranky, and doesn't like you that much. He
secretly hopes the standby won't come up, and he barely knows how to use a
console.

Write the script.
Test it.

merlin

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [HACKERS] pg_dump roles support

diff -ur postgresql-8.3.1.orig/doc/src/sgml/backup.sgml postgresql-8.3.1/doc/src/sgml/backup.sgml
--- postgresql-8.3.1.orig/doc/src/sgml/backup.sgml 2008-03-07 02:46:50.000000000 +0100
+++ postgresql-8.3.1/doc/src/sgml/backup.sgml 2008-08-27 15:29:26.000000000 +0200
@@ -68,7 +68,9 @@
<application>pg_dump</> will by default connect with the database
user name that is equal to the current operating system user name. To override
this, either specify the <option>-U</option> option or set the
- environment variable <envar>PGUSER</envar>. Remember that
+ environment variable <envar>PGUSER</envar>. It is possible to change
+ the current user identifier of the dump session by using the
+ <option>--role</option> option. Remember that
<application>pg_dump</> connections are subject to the normal
client authentication mechanisms (which are described in <xref
linkend="client-authentication">).
diff -ur postgresql-8.3.1.orig/doc/src/sgml/ref/pg_dump.sgml postgresql-8.3.1/doc/src/sgml/ref/pg_dump.sgml
--- postgresql-8.3.1.orig/doc/src/sgml/ref/pg_dump.sgml 2007-12-11 20:57:32.000000000 +0100
+++ postgresql-8.3.1/doc/src/sgml/ref/pg_dump.sgml 2008-08-27 15:58:05.000000000 +0200
@@ -522,6 +522,18 @@
</varlistentry>

<varlistentry>
+ <term><option>--role=<replaceable class="parameter">rolename</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the user identifier used by the dump session. This will cause
+ <application>pg_dump</application> to issue a
+ <command>SET ROLE TO <replaceable class="parameter">rolename</replaceable></command>
+ command just after a successful database connection.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-v</></term>
<term><option>--verbose</></term>
<listitem>
diff -ur postgresql-8.3.1.orig/doc/src/sgml/ref/pg_dumpall.sgml postgresql-8.3.1/doc/src/sgml/ref/pg_dumpall.sgml
--- postgresql-8.3.1.orig/doc/src/sgml/ref/pg_dumpall.sgml 2007-12-11 20:57:32.000000000 +0100
+++ postgresql-8.3.1/doc/src/sgml/ref/pg_dumpall.sgml 2008-08-27 15:49:18.000000000 +0200
@@ -248,6 +248,18 @@
</varlistentry>

<varlistentry>
+ <term><option>--role=<replaceable class="parameter">rolename</replaceable></option></term>
+ <listitem>
+ <para>
+ Specifies the user identifier used by the dump session. This option will be passed
+ to <application>pg_dump</> and will cause <application>pg_dump</application> to issue a
+ <command>SET ROLE TO <replaceable class="parameter">rolename</replaceable></command>
+ command just after a successful database connection.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
<term><option>-v</></term>
<term><option>--verbose</></term>
<listitem>
diff -ur postgresql-8.3.1.orig/doc/src/sgml/release.sgml postgresql-8.3.1/doc/src/sgml/release.sgml
--- postgresql-8.3.1.orig/doc/src/sgml/release.sgml 2008-03-14 00:47:59.000000000 +0100
+++ postgresql-8.3.1/doc/src/sgml/release.sgml 2008-08-27 16:06:12.000000000 +0200
@@ -2395,6 +2395,13 @@

<listitem>
<para>
+ Add <literal>--role</> option to <application>pg_dump</application> and
+ <application>pg_dumpall</application> (Benedek Laszlo)
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
Add <literal>--tablespaces-only</> and <literal>--roles-only</>
options to <application>pg_dumpall</application> (Dave Page)
</para>
diff -ur postgresql-8.3.1.orig/src/bin/pg_dump/pg_dump.c postgresql-8.3.1/src/bin/pg_dump/pg_dump.c
--- postgresql-8.3.1.orig/src/bin/pg_dump/pg_dump.c 2008-01-30 19:35:55.000000000 +0100
+++ postgresql-8.3.1/src/bin/pg_dump/pg_dump.c 2008-08-27 15:10:41.000000000 +0200
@@ -208,6 +208,7 @@
const char *pgport = NULL;
const char *username = NULL;
const char *dumpencoding = NULL;
+ const char *pgrole = NULL;
const char *std_strings;
bool oids = false;
TableInfo *tblinfo;
@@ -258,6 +259,7 @@
{"no-acl", no_argument, NULL, 'x'},
{"compress", required_argument, NULL, 'Z'},
{"encoding", required_argument, NULL, 'E'},
+ {"role", required_argument, NULL, 'r' + 0x80},
{"help", no_argument, NULL, '?'},
{"version", no_argument, NULL, 'V'},

@@ -437,6 +439,10 @@
/* This covers the long options equivalent to -X xxx. */
break;

+ case 'r' + 0x80: /* role */
+ pgrole = optarg;
+ break;
+
default:
fprintf(stderr, _("Try \"%s --help\" for more information.\n"), progname);
exit(1);
@@ -539,6 +545,18 @@
exit(1);
}
}
+
+ /* Set the role if requested */
+ if (pgrole)
+ {
+ PQExpBuffer roleQry = createPQExpBuffer();
+ appendPQExpBuffer(roleQry, "SET ROLE TO %s;\n", fmtId(pgrole));
+ PGresult *res = PQexec(g_conn, roleQry->data);
+ check_sql_result(res, g_conn, roleQry->data, PGRES_COMMAND_OK);
+
+ PQclear(res);
+ destroyPQExpBuffer(roleQry);
+ }

/*
* Get the active encoding and the standard_conforming_strings setting, so
@@ -771,6 +789,7 @@
printf(_(" --use-set-session-authorization\n"
" use SESSION AUTHORIZATION commands instead of\n"
" ALTER OWNER commands to set ownership\n"));
+ printf(_(" --role set role before dump\n"));

printf(_("\nConnection options:\n"));
printf(_(" -h, --host=HOSTNAME database server host or socket directory\n"));
diff -ur postgresql-8.3.1.orig/src/bin/pg_dump/pg_dumpall.c postgresql-8.3.1/src/bin/pg_dump/pg_dumpall.c
--- postgresql-8.3.1.orig/src/bin/pg_dump/pg_dumpall.c 2008-01-01 20:45:55.000000000 +0100
+++ postgresql-8.3.1/src/bin/pg_dump/pg_dumpall.c 2008-08-27 15:12:00.000000000 +0200
@@ -112,6 +112,7 @@
{"password", no_argument, NULL, 'W'},
{"no-privileges", no_argument, NULL, 'x'},
{"no-acl", no_argument, NULL, 'x'},
+ {"role", required_argument, NULL, 'r' + 0x80},

/*
* the following options don't have an equivalent short option letter
@@ -241,6 +242,14 @@
roles_only = true;
break;

+ case 'r' + 0x80:
+#ifndef WIN32
+ appendPQExpBuffer(pgdumpopts, " --role '%s'", optarg);
+#else
+ appendPQExpBuffer(pgdumpopts, " --role \"%s\"", optarg);
+#endif
+ break;
+
case 's':
schema_only = true;
appendPQExpBuffer(pgdumpopts, " -s");
@@ -505,7 +514,8 @@
printf(_(" --use-set-session-authorization\n"
" use SESSION AUTHORIZATION commands instead of\n"
" OWNER TO commands\n"));
-
+ printf(_(" --role set role before dump\n"));
+
printf(_("\nConnection options:\n"));
printf(_(" -h, --host=HOSTNAME database server host or socket directory\n"));
printf(_(" -l, --database=DBNAME specify an alternative default database\n"));
Hello,

daveg wrote:
>> I created a patch to set the role to a specified name just after the db
>> connection.
>>
> I was going to do this, but you have beat me to it. You will want to update
> the sgml documentation, and pg_dumpall as well.
>
> -dg
>
>
Ok, here is the next one.
pg_dumpall now just passes the --role option to pg_dump. What do you
think: is that enough, or should it issue the SET ROLE TO ... command in
its own session too?

Laszlo Benedek

Re: [GENERAL] Dumping/Restoring with constraints?

On Wednesday 27 August 2008 09:36, Phoenix Kiula wrote:
> On Wed, Aug 27, 2008 at 9:29 PM, Terry Lee Tucker <terry@chosen-ones.org>
wrote:
> > We have all sorts of constraints and foreign keys and we have never had
> > any problem with pg_restore related to dumping such that foreign keys are
> > satisfied. You must have data already in the database that violates the
> > restraints. You can restore in two phases; that is, by restoring the
> > schema, and then the data using --disable-triggers. I'm assuming you are
> > doing a binary dump. See the man page for pg_restore.
>
> Thanks for this. I don't have any foreign key violations in my
> existing database. I think the violation is happening because upon
> restoring the table that is being populated checks in another table
> that doesn't yet have data.
>
> I am not using pg_restore. I am just using "psql --file=FILENAME"
> syntax. Is that an issue?

The errors you are having, then, must be related to your own trigger code. It
sounds like you will need to prevent those triggers from firing and the only
way I know how to accomplish that is to do a binary dump and then use
pg_restore as I indicated earlier. There is no way to disable triggers in
your method referenced above.
--
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30501
Tel: (336) 372-6812 Fax: (336) 372-6812 Cell: (336) 404-6987
terry@turbocorp.com
www.turbocorp.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [BUGS] BUG #4381: Postgresql daemon won't stay in the foreground

That did it (silent_mode was set to on by default).

Thank you very much.

Tom Lane wrote:
> Tom Wright <tom.c.wright@gmail.com> writes:
>
>> I'm starting it with "postgres -D <datadir>", and it immediately forks
>> into the background....
>>
>
> Perhaps you have silent_mode set in the configuration file? AFAIK the
> postmaster will never fork away from the caller unless that's been set
> in one way or another.
>
> regards, tom lane
>
>


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [GENERAL] Dumping/Restoring with constraints?

> The recommended solution is to use a regular schema-and-data dump.
> If you really have to separate schema and data, the --disable-triggers
> switch might help, though you open yourself to the possibility of
> loading inconsistent data that way.


Thanks Tom.

This is the dump command being used on a 8.2.3 database on Linux:

$ pg_dumpall > mydb.sql
$ umask 077
$ gzip mydb.sql

Then I download the mydb.sql.gz file into my local computer (Mac OSX
with 8.3.3) and unzip it to mydb.sql.

The local database is already created, with all tables and constraints and all.

Here is the command I use to restore:

$ psql -d mydb -U myuser -h localhost --file=mydb.sql

Is this not the recommended method?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-es-ayuda] Error en instalación 8.3

Sergio Valdes Hurtado wrote:
> José Antonio,
>
> I am one of those who reported the same error you are seeing, and I
> never got an answer from anyone, so we had to discard PostgreSQL as
> the database for one of our projects.
>
> I hope that this time your question does get an answer and a solution.

We don't have much experience with the Windows installers here. I
suggest you report this somewhere that does, e.g. on the
installer's site, http://pgfoundry.org/projects/pginstaller/

Or perhaps on pgsql-bugs, possibly via
http://www.postgresql.org/support/submitbug

--
Alvaro Herrera http://www.flickr.com/photos/alvherre/
"The problem with the future is that it keeps turning into the present"
(Hobbes)
--
TIP 10: don't use HTML in your question; whoever answers probably won't be able to read it

Re: [PERFORM] Is there a way to SubPartition?

On Wed, 27 Aug 2008, Jerry Champlin wrote:
> After it's a day old, there are no longer any updates or inserts and we
> can vacuum it at that point.

A pattern that has worked very well for other people is to have two
separate tables (or partitions). One contains today's data, and the other
contains historic data that is no longer updated. Once a day, transfer the
data between the partitions, and the historic data partition will not need
vacuuming.
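A hedged sketch of that pattern; the table and column names are invented for illustration, and the real schema will differ:

```sql
-- Once a day, move rows older than today from the live table to the
-- historic one. The historic table sees only inserts, so it needs no
-- routine vacuuming for update/delete churn.
BEGIN;
INSERT INTO readings_history
    SELECT * FROM readings_today
    WHERE taken_at < date_trunc('day', now());
DELETE FROM readings_today
    WHERE taken_at < date_trunc('day', now());
COMMIT;
```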

Some changes to your code will be needed however.

Matthew

--
Vacuums are nothings. We only mention them to let them know we know
they're there.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [pgsql-es-ayuda] RECARGAR PG_HBA

This is the first time I write to the list; I hope you can help me.

I need to know about PostgreSQL's performance: I have to build a forms application, and my boss doesn't quite believe the good things I tell him about Postgres, so I want to show him with documentation how well this database runs.

Greetings to all,
and I hope you can help me.

Re: [HACKERS] Is it really such a good thing for newNode() to be a macro?

"Heikki Linnakangas" <heikki@enterprisedb.com> writes:
> Tom Lane wrote:
>> I happened to be looking at nodes.h and started wondering just how
>> sane this coding really is:

> Note that the MemSetLoop macro used in palloc0fast is supposed to be
> evaluated at compile time,

Oooh, good point, I had forgotten about that little detail. Yeah,
we'll lose that optimization if we move the code out-of-line.

So I guess a fallback position is #if __gcc__ use a "static inline"
function, else the existing code. That would at least let us get
rid of the global-variable assignment in gcc-based builds.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [BUGS] BUG #4377: casting result of timeofday() to timestamp fails in some timezones

Tom Lane wrote:
>
> In the meantime, why aren't you just using clock_timestamp()?
> timeofday() is deprecated.

I am using clock_timestamp() now. The use of timeofday() was from the era
when 8.0 was hot stuff.

BTW, the word "deprecated" does not appear on the docs page where timeofday()
is listed (http://www.postgresql.org/docs/8.3/static/functions-datetime.html),
and there doesn't seem to be anything in the context of the 3 occurrences of
"timeofday" that would imply it is deprecated.

-- todd


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [PERFORM] Is there a way to SubPartition?

Jerry Champlin <jchamplin@absolute-performance.com> writes:
> Is there a way to use multi-level inheritance to achieve sub
> partitioning that the query optimizer will recognize?

No, I don't think so. How would that make things any better anyway?
You're still going to end up with the same very large number of
partitions.

regards, tom lane

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [pgsql-es-ayuda] RECARGAR PG_HBA

Fabio Arias wrote:
> But folks, wouldn't this restart my database?

No.

--
Alvaro Herrera Developer, http://www.PostgreSQL.org/
"What difference does it make to the dead, the orphans, and the homeless,
whether the mad destruction is wrought under the name of totalitarianism
or the holy name of liberty and democracy?" (Gandhi)
--
TIP 9: visit our IRC channel #postgresql-es on irc.freenode.net
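For context: changes to pg_hba.conf take effect on a reload, which only signals the running postmaster. A sketch with a placeholder data directory; the commands are echoed rather than run:

```shell
# Either of these re-reads pg_hba.conf without restarting the server.
RELOAD_CMD="pg_ctl reload -D /var/lib/pgsql/data"
SIGNAL_CMD="kill -HUP \$(head -1 /var/lib/pgsql/data/postmaster.pid)"

echo "$RELOAD_CMD"
echo "$SIGNAL_CMD"
```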

Re: [BUGS] BUG #4381: Postgresql daemon won't stay in the foreground

Tom Wright <tom.c.wright@gmail.com> writes:
> I'm starting it with "postgres -D <datadir>", and it immediately forks
> into the background....

Perhaps you have silent_mode set in the configuration file? AFAIK the
postmaster will never fork away from the caller unless that's been set
in one way or another.

regards, tom lane

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [ADMIN] [GENERAL] PITR - base backup question

--- On Tue, 8/26/08, Richard Broersma <richard.broersma@gmail.com> wrote:

> From: Richard Broersma <richard.broersma@gmail.com>
> Subject: [GENERAL] PITR - base backup question
> To: "pgsql-general@postgresql.org" <pgsql-general@postgresql.org>, pgsql-admin@postgresql.org
> Date: Tuesday, August 26, 2008, 10:53 PM
> From the following link:
> http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html#BACKUP-BASE-BACKUP
>
> Step 3 says to perform the back up.
>
> Does this mean a File System Backup of the Data
> directory?
> OR
> Does this mean performing a pg_dumpall and backing up
> the dump file?
>
It is a file system backup of the data directory.
> --
> Regards,
> Richard Broersma Jr.


--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [GENERAL] Dumping/Restoring with constraints?

"Phoenix Kiula" <phoenix.kiula@gmail.com> writes:
> Hello. I have a database dump. With data and schema, which includes
> all the constraints and rules. But it seems the pg_dumpall command
> does not copy data in such a way that foreign keys are satisfied upon
> restoring. Because tables are inter-related, importing them keep
> giving errors and eventually no data is imported.

This shouldn't be possible in a regular dump, at least not with any
remotely modern version of PG. However, if you are trying to restore a
data-only dump into an already-created set of tables, it is possible
because pg_dump doesn't know how to order the data loads in that case.
(The problem may in fact be insoluble, since circular foreign key
relationships are allowed.)

The recommended solution is to use a regular schema-and-data dump.
If you really have to separate schema and data, the --disable-triggers
switch might help, though you open yourself to the possibility of
loading inconsistent data that way.

regards, tom lane

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [austinpug] Meeting reminder

On Aug 26, 2008, at 8:55 PM, Decibel! wrote:
> The topic will be free-form ala a Bar Camp, so bring something
> you'd like to present about. Tips, tricks, cool stuff you / your
> company is doing, anything Postgres related is fair game!


Wiki is up at http://wiki.lerctr.org/index.php/2008-09-02!
--
Decibel!, aka Jim C. Nasby, Database Architect decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Re: [GENERAL] PITR - base backup question



On Wed, Aug 27, 2008 at 8:32 AM, Merlin Moncure <mmoncure@gmail.com> wrote:

 
> 3.  Shut down the Postgresql server running on the backup server, if any
>          pg_ctl stop
>     (Use 'ps ax' to make sure the server is stopped.)

probably pg_ctl -m fast stop or -m immediate, since we are overwriting it.

Good point, but sometimes there are tasks running on the backup server (recently I did a test of a revised procedure that took 5 days to run.)  I'll probably update the cookbook guide to deal with that possibility.


> 5.  Delete the entire contents of the /disk1/postgres/data directory tree.
>     MAKE ABSOLUTELY SURE YOU ARE IN THE /disk1/postgres/data directory!
>
>      cd /disk1/postgres/data
>      pwd
>      rm -rf *

why not just rm -rf /disk1/postgres/data?

I prefer the visual confirmation that I am indeed in the right directory rather than risk a typo in the longer rm command.

> 6.  Restore the tar file for the low-level backup from the live server
>
>     tar xvf /usr/local/pgsql/tardir/pg_part1.tar
>
>     (This restore may take 2-3 hours)
>
> 7.  Remove the PostgreSQL log file and the WAL files that were restored
>     from the tar archive

I prefer to exclude WAL files from the original tar, and recreate the
folders here (being careful to chown them to postgres account).  Every
little bit helps.

Another good point.

> 12. Go to the directory where the WAL files have been archived on
>     the server and remove all files older than the file matching the
>     last .backup file.  The fastest way to do this is as follows:
>
>     cd /usr/local/pgsql/archivedir
>     ls -1 > files

This is a nice touch.  With a little bash-fu you could do a find |
xargs rm and list/kill the files in one pass.  In the standby setups
I've done I usually script the whole process, a prep on the main and a
startup on the standby.
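The one-pass cleanup can be illustrated with a self-contained simulation (fake file names in a scratch directory; a real script would run in the archive directory). The key property it relies on is that WAL segment names sort lexicographically, so anything sorting before the segment named in the newest .backup history file is safe to remove:

```shell
# Build a scratch directory with fake archived WAL files.
dir=$(mktemp -d)
cd "$dir"
touch 000000010000000000000086 000000010000000000000087 \
      000000010000000000000088 000000010000000000000088.0089ED8C.backup \
      000000010000000000000089

newest_backup=$(ls -1 *.backup | tail -n 1)
keep_from="${newest_backup%%.*}"   # segment named by the history file

# Remove every plain segment that sorts before $keep_from.
for f in *; do
    case "$f" in *.backup) continue ;; esac
    if [ "$f" != "$keep_from" ] && \
       [ "$f" = "$(printf '%s\n%s\n' "$f" "$keep_from" | sort | head -n 1)" ]; then
        rm "$f"
    fi
done

ls -1
```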

The scripts to create the system level backups and copy them to the backup servers and to rsync the WAL files are both cron jobs.

I've considered writing a script to do all the restore tasks on the backup server, but I figure if someone other than me ever has to do it, for example if there is a problem when I'm unavailable, that person will probably want to watch the entire process carefully, since he or she will be far less familiar with the configuration.

As disk space permits, I actually keep TWO sets of the tar archive base files, the most recent one and the one from the previous week.  That way I could go back further for a PITR recovery up to some incident.  I've never needed to do that and I hope I never do, but it's nice to have that capability just in case.
--
Mike Nolan

Re: [HACKERS] Is it really such a good thing for newNode() to be a macro?

Tom Lane wrote:
> I happened to be looking at nodes.h and started wondering just how
> sane this coding really is:
>
> extern PGDLLIMPORT Node *newNodeMacroHolder;
>
> #define newNode(size, tag) \
> ( \
> AssertMacro((size) >= sizeof(Node)), /* need the tag, at least */ \
> newNodeMacroHolder = (Node *) palloc0fast(size), \
> newNodeMacroHolder->type = (tag), \
> newNodeMacroHolder \
> )
>
> Given that we're calling palloc, it's not clear that saving one level of
> function call is really buying much; and what it's costing us is a store
> to a global variable that the compiler has no way to optimize away.
> On a lot of platforms, accessing global variables isn't especially
> cheap. Also, considering that palloc0fast is a nontrivial macro, and
> that there are a LOT of uses of newNode(), we're paying rather a lot of
> code space for a pretty dubious savings.

Note that the MemSetLoop macro used in palloc0fast is supposed to be
evaluated at compile time, so the code space taken by that macro isn't
that big. Turning newNode into a function would force it to be evaluated
at run-time instead.
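For comparison, here is a sketch of what newNode() could look like as a function, with no store to a global holder; the Node definitions and the palloc0fast stand-in below are simplified placeholders for illustration, not the real backend code:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the real backend definitions. */
typedef enum NodeTag { T_Invalid = 0, T_ExampleNode } NodeTag;
typedef struct Node { NodeTag type; } Node;

/* Stand-in for palloc0fast: a zeroed allocation.  (The real one draws
 * from the current memory context via a macro over MemSetLoop.) */
static void *
palloc0fast(size_t size)
{
    return calloc(1, size);
}

/* newNode as a function: a local variable sequences the store and the
 * return, so no newNodeMacroHolder global is needed. */
static Node *
newNode(size_t size, NodeTag tag)
{
    Node *result;

    assert(size >= sizeof(Node));   /* need the tag, at least */
    result = (Node *) palloc0fast(size);
    result->type = tag;
    return result;
}
```

The trade-off under discussion is exactly this: the function form loses the compile-time-constant length that lets MemSetLoop unroll, but drops the global store and shrinks every call site.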

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] loop vs. aggregate was: update and group by/aggregate

On Wed, Aug 27, 2008 at 02:58:18PM +0200, Ivan Sergio Borgonovo wrote:
> On Wed, 27 Aug 2008 12:45:42 +0100 Sam Mason <sam@samason.me.uk> wrote:
> > Have you been vacuuming (non-full) between runs? and as always,
> > are the stats reasonably up to date?
>
> there is autovacuum running regularly, and I vacuumed full, analyzed
> and reindexed everything recently: that means all changes to the DB
> were done BEFORE the maintenance, but I then executed several selects
> on tables (including the ones involved).
> I tried to run the function and the statement 3 times, one after the
> other... so they should have been in a similar situation.

I'd probably start by doing a normal vacuum, then run a command that
touches every row, then run a normal vacuum again.  Between every
iteration you want to run a normal vacuum, otherwise the later runs will
see more garbage than the previous run (updating a row leaves a dead row
behind).

> > > Can anybody explain why aggregates under perform so badly?
>
> > You could try just running the SELECT part to see how long the
> > aggregation takes.
>
> 33sec
> Surprising.

OK, so it's the UPDATE that's taking the time. I'm not experienced
enough to say why though.

> > It's quite easy to outsmart the planner with large amounts of
> > data, but it's surprising how well it does most of the time.
> > Generally you can just write whatever is obvious and the planner
> > will do something good with it. If it doesn't do the right thing
> > then you can worry about performance, rather than most languages
> > where you have to worry about performance from the start.
>
> I really thought that in this case the planner was going to outsmart
> me, since in that statement I think it could see a lot more room for
> optimisation than I could, even knowing the nature of the data.

An experienced human thinking for several seconds (or probably minutes
in this example) is almost always going to do better than a computer
thinking for a millisecond.

> > have you set work_mem to some obscenely big value?
>
> 32MB

OK.  It's just that it was doing a hash aggregation involving 160MB of
data.  That will cause it to spill to disk, and I'd think it would
prefer to do something else.

> Tom suggested raising that number into the range of tens of MB for
> another problem.  I saw the hot spot was at 16MB, and since I was
> expecting the DB to get bigger I increased it to 32MB.

32MB seems a reasonable default on modern hardware.

> > You're trimming an extra close square bracket (']') in one of the
> > trim statements and not in the other. If you just do it in one
> > place then you don't have to worry about inconsistency.
>
> Guess: that was a typo. regexp brain context switching :)

I do the same thing far too often, hence I tend to do similar query
rewrites, as I did on yours, to prevent this (human bug/feature) from
happening.

> I do know Knuth and I think I share his opinions. I don't know
> "modern programming people" and I'm alien to the concept of
> "fundamental good".

A lot of programming courses will try to teach you to remove all
constraints from your code, whether they're going to be hit or not.
In the real world, resources are finite and effort has to be spent
appropriately.

In my eyes this is the difference between computer science and software
engineering.  Software engineering is just interested in making
something that works now; computer science is about pushing back the
boundaries of what's possible.  Too often the two get confused.

> But well, I'm here to learn. Could you point me to some explanation
> on why it should be a "fundamental good" in DB context?

As in why using TEXT is good over a large VARCHAR?  It's an engineering
choice, normally.  Some database systems optimize one or the other a
lot more, so people tend to prefer one for arbitrary reasons.  In PG
it doesn't really matter in terms of performance, and you should use
whichever expresses the data you're trying to store appropriately.

> I do think that most of the time it is worthwhile (and part of the
> problem) to make a reasonable forecast.  If I can't really make one,
> or it requires too much effort to make a reasonable one, I at least
> start developing with an "unreasonable" constraint just to catch some
> errors earlier, then drop the constraint and leave a comment.

Yes, however this requires experience of what's worked so far and is
difficult to teach.


Sam

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

[PERFORM] Is there a way to SubPartition?

Is there a way to use multi-level inheritance to achieve sub-partitioning
that the query optimizer will recognize?  With our current
application design, we would need a partition for every other day for
18 months, which will not perform well.  The reason we need so many
partitions is that we can't afford to vacuum the active partition (750MM
inserts + updates per day is the performance requirement for 12 months
out).  After a partition is a day old there are no longer any updates or
inserts, and we can vacuum it at that point.  If multi-level partitioning
worked, we could solve this problem without changing our code.  Ideas?

-Jerry

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [pgsql-es-ayuda] Reloading pg_hba

Folks, but that would restart my database. Isn't there a way to reload pg_hba from the psql console?

2008/8/26 Agustin Casiva <casivaagustin@gmail.com>

2008/8/26 Fabio Arias <fharias@gmail.com>:
> Good afternoon, list. My question is the following: how can I reload
> the pg_hba.conf configuration without stopping my database?
>

Also, depending on the OS you can, for example if you are on Debian, do

#/etc/init.d/postgresql-8.3 reload

Or on Windows, from the Start menu -> Programs -> PostgreSQL -> Service Reload (or something like that; I don't remember it exactly).

Check how it's done on your OS.

Regards

--

Casiva  Agustin

Mail/Msn: casivaagustin@gmail.com
http://www.casivaagustin.com.ar



--
Fabio Hernando Arias Vera
Cel. 314 411 7776

Re: [HACKERS] Another refactoring proposal: move stuff into nodes/nodeFuncs.[ch]

Simon Riggs <simon@2ndQuadrant.com> writes:
> On Mon, 2008-08-25 at 11:24 -0400, Tom Lane wrote:
>> The advantages of doing this would be (a) reduce the number of places
>> to look in when implementing a new node type; (b) eliminate some
>> cross-subsystem #inclusions that weaken modularity of the backend.

> Are we doing either of those things in this release?

Yes, it's done already.

> Might these changes screw up patches already in progress? Can we hold
> off making these changes until we're sure the latter isn't true?

Some patches might need small adjustments (to find the code in a
different file) but that does not strike me as an argument for not
changing things. We have applied far more invasive patches in the past
and undoubtedly will do so again in future. In any case, it's still a
long way until beta freeze, so there's plenty of time to deal with
fallout. I don't see that some other time would be better.

regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [GENERAL] Dumping/Restoring with constraints?

On Wed, Aug 27, 2008 at 9:29 PM, Terry Lee Tucker <terry@chosen-ones.org> wrote:

> We have all sorts of constraints and foreign keys and we have never had any
> problem with pg_restore related to dumping such that foreign keys are
> satisfied. You must have data already in the database that violates the
> restraints. You can restore in two phases; that is, by restoring the schema,
> and then the data using --disable-triggers. I'm assuming you are doing a
> binary dump. See the man page for pg_restore.

Thanks for this. I don't have any foreign key violations in my
existing database. I think the violation is happening because, upon
restoring, the table being populated checks against another table
that doesn't yet have data.

I am not using pg_restore. I am just using "psql --file=FILENAME"
syntax. Is that an issue?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [pgsql-es-ayuda] Error installing 8.3

Sergio Valdes Hurtado wrote:
> José Antonio,
>
> I am one of the people who reported the same error you have, and I
> never received a reply from anyone, so we had to drop PostgreSQL as
> the database we used in a project.
>
> I hope that this time your question does get an answer and a solution.
>
> Regards.
>
> On 24 August 2008 at 12:56, Jose Antonio Pons
> <jose.a.pons@gmail.com> wrote:
>
>> Greetings to the list.
>>
>> For some time I had PostgreSQL 8.2.0 installed on a Windows
>> Server 2003 R2 SP2 machine, to run an ERP that I am testing.  To check
>> whether this software would also run on 8.3, I uninstalled it in the
>> test environment and installed again.
>>
>> In short, the problem I ran into is the following: the installation is
>> launched from a user session with full rights (Administrator).  When
>> the installation is completing and tries to start the database, the
>> operation aborts ("Rolling back action") and the file copy and the
>> rest of the configuration steps are cancelled; finally this
>> "FatalError" message appears: "Installation ended prematurely because
>> of an error".  Only the system user created for the installation remains.
>> I have tried all three releases in the series (8.3.0, 8.3.1 and 8.3.3)
>> with the same result.  To rule out any interference from previous
>> installations, I formatted and reinstalled the machine from scratch,
>> but the error repeats.
>> In the end, I managed to install version 8.2.9 without problems.
>> The error also occurs when installing on another machine running
>> Windows XP SP2.
>>
>> Searching the list archives, I found that the problem was already
>> mentioned in the following posts, which went unanswered:
>>
>> http://archives.postgresql.org/pgsql-es-ayuda/2008-02/msg00524.php
>> http://archives.postgresql.org/pgsql-es-ayuda/2008-03/msg00363.php
>> http://archives.postgresql.org/pgsql-es-ayuda/2008-03/msg00482.php
>> http://archives.postgresql.org/pgsql-es-ayuda/2008-06/msg00310.php
>>
>> I'd appreciate any information on the subject.
>> Regards.
>>
>> José Antonio Pons
>>
>>
>>
>> Text of the two error notices that appear in the event viewer:
>>
>> Event type: Information
>> Event source: MsiInstaller
>> Event category: None
>> Event ID: 11708
>> Date: 21/08/2008
>> Time: 18:14:05
>> User: NET001P01\Administrador
>> Computer: NET001P01
>> Description:
>> Product: PostgreSQL 8.3 -- Installation failed.
>> Datos:
>> 0000: 7b 42 38 32 33 36 33 32 {B823632
>> 0008: 46 2d 33 42 37 32 2d 34 F-3B72-4
>> 0010: 35 31 34 2d 38 38 36 31 514-8861
>> 0018: 2d 42 39 36 31 43 45 32 -B961CE2
>> 0020: 36 33 32 32 34 7d 63224}
>>
>> Event type: Information
>> Event source: MsiInstaller
>> Event category: None
>> Event ID: 11708
>> Date: 21/08/2008
>> Time: 18:11:42
>> User: NET001P01\Administrador
>> Computer: NET001P01
>> Description:
>> Product: PostgreSQL MUI Wrapper 8.3 -- Installation failed.
>> Datos:
>> 0000: 7b 34 41 46 46 42 39 46 {4AFFB9F
>> 0008: 38 2d 41 30 37 31 2d 34 8-A071-4
>> 0010: 35 44 45 2d 42 36 35 39 5DE-B659
>> 0018: 2d 36 45 32 45 34 44 45 -6E2E4DE
>> 0020: 32 42 41 42 44 7d 2BABD}
>>
> --
> TIP 7: don't forget to increase the "free space map" configuration
>
Folks, a question: which ERP are you trying to install?  Something
similar happened to me with Openbravo; what I did was delete the
previous installation's registry entries, delete the users, delete
everything the installer generates, and run the process again....
--
TIP 6: Have you searched our mailing list archives?
http://archives.postgresql.org/pgsql-es-ayuda

Re: [GENERAL] PITR - base backup question

On Wed, Aug 27, 2008 at 9:18 AM, Michael Nolan <htfoot@gmail.com> wrote:
> I have what I have sometimes called a 'tepid spare' backup. Once a week I
> copy the physical files over to another system (actually to two of them) and
> every few hours I make sure the archived WAL log files are in sync (using
> rsync.)


I have a couple of comments...see below:

> 3. Shut down the PostgreSQL server running on the backup server, if any
> pg_ctl stop
> (Use 'ps ax' to make sure the server is stopped.)

probably pg_ctl -m fast stop or -m immediate...since we are overwriting it.

> 5. Delete the entire contents of the /disk1/postgres/data directory tree.
> MAKE ABSOLUTELY SURE YOU ARE IN THE /disk1/postgres/data directory!
>
> cd /disk1/postgres/data
> pwd
> rm -rf *

why not just rm -rf /disk1/postgres/data?

> 6. Restore the tar file for the low-level backup from the live server
>
> tar xvf /usr/local/pgsql/tardir/pg_part1.tar
>
> (This restore may take 2-3 hours)
>
> 7. Remove the PostgreSQL log file and the WAL files that were restored
> from the tar archive

I prefer to exclude WAL files from the original tar, and recreate the
folders here (being careful to chown them to the postgres account). Every
little bit helps.
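A sketch of that exclusion, demonstrated on scratch directories (GNU tar assumed; in real use the source would be the live data directory and the tarball would land in /usr/local/pgsql/tardir/):

```shell
# Demo: exclude WAL segments from the base-backup tar, then recreate
# pg_xlog on the standby.  Scratch paths stand in for the real ones.
src=$(mktemp -d)    # stands in for the live data directory
dst=$(mktemp -d)    # stands in for /disk1/postgres/data on the standby
mkdir -p "$src/pg_xlog/archive_status" "$src/base"
touch "$src/pg_xlog/000000010000000000000001" "$src/base/16384"

tarfile=$(mktemp)
# Skip the WAL segment files but keep the pg_xlog directory itself
tar -C "$src" --exclude='pg_xlog/0*' -cf "$tarfile" .

tar -C "$dst" -xf "$tarfile"
mkdir -p "$dst/pg_xlog/archive_status"   # chown to postgres in real use
```

That keeps the segments (often gigabytes) out of the weekly tar, which is where the "every little bit helps" comes from.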


> 12. Go to the directory where the WAL files have been archived on
> the server and remove all files older than the file matching the
> last .backup file. The fastest way to do this is as follows:
>
> cd /usr/local/pgsql/archivedir
> ls -1 > files

This is a nice touch. With a little bash-fu you could do a find |
xargs rm and list/kill the files in one pass. In the standby setups
I've done I usually script the whole process, a prep on the main and a
startup on the standby.

merlin

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Dumping/Restoring with constraints?

On Wednesday 27 August 2008 09:19, Phoenix Kiula wrote:
> Hello. I have a database dump. With data and schema, which includes
> all the constraints and rules. But it seems the pg_dumpall command
> does not copy data in such a way that foreign keys are satisfied upon
> restoring. Because tables are inter-related, importing them keep
> giving errors and eventually no data is imported. Neither pg_dumpall
> nor pg_restore seems to have a "without constraints" or "delay
> constraints check" type command. What am I missing? Thanks for any
> advice.

We have all sorts of constraints and foreign keys and we have never had any
problem with pg_restore related to dumping such that foreign keys are
satisfied. You must have data already in the database that violates the
restraints. You can restore in two phases; that is, by restoring the schema,
and then the data using --disable-triggers. I'm assuming you are doing a
binary dump. See the man page for pg_restore.

HTH
--
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30501
Tel: (336) 372-6812 Fax: (336) 372-6812 Cell: (336) 404-6987
terry@turbocorp.com
www.turbocorp.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [ADMIN] [GENERAL] Dumping/Restoring with constraints?

On Wed, Aug 27, 2008 at 3:19 PM, Phoenix Kiula <phoenix.kiula@gmail.com> wrote:
> Hello. I have a database dump. With data and schema, which includes
> all the constraints and rules. But it seems the pg_dumpall command
> does not copy data in such a way that foreign keys are satisfied upon
> restoring. Because tables are inter-related, importing them keep
> giving errors and eventually no data is imported. Neither pg_dumpall
> nor pg_restore seems to have a "without constraints" or "delay
> constraints check" type command. What am I missing? Thanks for any
> advice.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

Isn't the

--disable-triggers

working for you?

--
Marco Bizzarri
http://iliveinpisa.blogspot.com/

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

Re: [PERFORM] select on 22 GB table causes "An I/O error occured while sending to the backend." exception

Maybe strace could help you find the problem, but it could add a lot of
overhead...

"Bill Moran" <wmoran@collaborativefusion.com> wrote:
> ...
--
Daniel Cristian Cruz
Database Administrator
Regional Directorate - Information Technology Unit
SENAI - SC
Phone: 48-3239-1422 (ext. 1422)


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[GENERAL] Dumping/Restoring with constraints?

Hello. I have a database dump. With data and schema, which includes
all the constraints and rules. But it seems the pg_dumpall command
does not copy data in such a way that foreign keys are satisfied upon
restoring. Because tables are inter-related, importing them keep
giving errors and eventually no data is imported. Neither pg_dumpall
nor pg_restore seems to have a "without constraints" or "delay
constraints check" type command. What am I missing? Thanks for any
advice.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [ADMIN] [GENERAL] PITR - base backup question

I have what I have sometimes called a 'tepid spare' backup.  Once a week I copy the physical files over to another system (actually to two of them) and every few hours I make sure the archived WAL log files are in sync (using rsync.)

Anyway, here's the cookbook guide I wrote for updating one of the backup servers, which I have used several times, so I don't have to rediscover the steps each time.  I think it has sufficient detail that someone other than me could follow them.  Obviously it would have to be changed to apply to your file organization scheme.

Enjoy!
--
Mike Nolan


How to restore the PostgreSQL low-level backup tar files and WAL files
to do a point-in-time recovery (PITR) on a backup server.

This note will explain how to perform a point-in-time recovery of the
PostgreSQL database using the low-level backup files and the archived
WAL (write-ahead log) files.  (These steps should work on either server, with any differences in actual file locations dealt with using symbolic links.)

The low level backup files and WAL log files should already be present
on the backup servers.  The low level tar files are copied when they
are created (usually early on a Tuesday morning each week) and the WAL log
files are rsync'ed to both servers every few hours so that both backup
servers should have files enabling them to be restored to a state that is
no more than a few hours out of sync with the live database server.  (In
the future, we hope to keep one of the backup servers in full real-time
synchronization with the live server, using a different technique.)

The steps given below will generally take 3-5 hours to run, depending
on how many archived log files need to be processed when the PostgreSQL
server is restarted.

These instructions assume some familiarity with Unix/Linux system
administration tools, including the vi editor, and with database
administration procedures, though not necessarily detailed knowledge
of PostgreSQL.

For more details, see chapter 23 of the PostgreSQL documentation,
especially section 23.3.  The documentation for version 8.2 is at
http://www.postgresql.org/docs/8.2/static/backup.html


1.  su to root on the backup server that the restore will be performed on

2.  su to postgres (su - postgres)

3.  Shut down the PostgreSQL server running on the backup server, if any
         pg_ctl stop
    (Use 'ps ax' to make sure the server is stopped.)


4.  cd to /disk1/postgres/data and copy two files to /tmp

     cd /disk1/postgres/data
     cp pg_hba.conf /tmp
     cp recovery.done /tmp

5.  Delete the entire contents of the /disk1/postgres/data directory tree.
    MAKE ABSOLUTELY SURE YOU ARE IN THE /disk1/postgres/data directory!

     cd /disk1/postgres/data
     pwd
     rm -rf *

6.  Restore the tar file for the low-level backup from the live server

    tar xvf /usr/local/pgsql/tardir/pg_part1.tar

    (This restore may take 2-3 hours)

7.  Remove the PostgreSQL log file and the WAL files that were restored
    from the tar archive
  
    rm log.out
    cd pg_xlog
    rm 00*

    (do not remove the archive_status subdirectory)

8.  Copy the pg_hba.conf file back from the /tmp directory

    cd /disk1/postgres/data
    cp /tmp/pg_hba.conf .

    (This is necessary to keep this database server configured for restricted
    access.  If more expansive access is needed the restored pg_hba.conf
    file may be more appropriate.)

9. Copy the recovery configuration file from the /tmp directory (changing
    the file name)

   cp /tmp/recovery.done recovery.conf

   (This file may be edited to do a point-in-time recovery other than
   to the end of the PITR log files, see the PostgreSQL documentation for
   details.) 

10. Delete the entire contents of the /disk2/postgres/data directory tree.
    MAKE ABSOLUTELY SURE YOU ARE IN THE /disk2/postgres/data directory!

     cd /disk2/postgres/data
     pwd
     rm -rf *

11. Restore the 2nd tar archive

    tar xvf /usr/local/pgsql/tardir/pg_part2.tar

    (This restore may take around a half hour)

12. Go to the directory where the WAL files have been archived on
    the server and remove all files older than the file matching the
    last .backup file.  The fastest way to do this is as follows:

    cd /usr/local/pgsql/archivedir
    ls -1 > files

    This will produce a file listing all files in the archivedir directory.
   
    Now edit that file (using vi) to delete every line from the entry
    just ahead of the latest 'backup' file through the end of the
    buffer; that entry is the first WAL file that still needs to be
    processed during recovery.  (The names after the latest backup file
    are the files we're NOT going to delete, so we don't want them left
    in this text file.)

    For example, the directory list may look like this:

       00000001000000960000000A
       00000001000000960000000B
       00000001000000960000000C
       00000001000000960000000C.004027C0.backup
       00000001000000960000000D

    There will generally be several files with the name ending in 'backup';
    find the most recent one (i.e., the last one in the listing).

    In this case, the file with the name ending in '000C' needs to be
    restored, along with all subsequent files, so we would delete all
    lines from the one ending in '000C' to the end of the edit buffer,
    then edit the remaining lines into 'rm' commands, i.e., something
    like this:

       rm 00000001000000960000000A
       rm 00000001000000960000000B


    Save the edited file, then execute it

       sh -x files
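If you'd rather not edit a listing by hand, the same cleanup can be scripted in one pass.  This is a sketch, demonstrated on a scratch directory seeded with the example names from above (in real use, the directory would be /usr/local/pgsql/archivedir); it relies on WAL segment names sorting lexicographically in creation order:

```shell
# Demo: one-pass cleanup of archived WAL files.  Every file whose name
# sorts before the base segment named in the newest .backup marker is
# removed; the base segment and everything after it are kept.
archivedir=$(mktemp -d)
cd "$archivedir"
touch 00000001000000960000000A 00000001000000960000000B \
      00000001000000960000000C 00000001000000960000000C.004027C0.backup \
      00000001000000960000000D

latest=$(ls -1 *.backup | tail -n 1)   # newest backup marker
cutoff=${latest%%.*}                   # its base segment name: ...000C
for f in [0-9A-F]*; do
    if expr "$f" \< "$cutoff" >/dev/null; then
        rm -- "$f"                     # older than the base segment
    fi
done
```

With the example names above this removes ...000A and ...000B and keeps ...000C, the .backup marker, and ...000D, which matches the manual edit described in step 12.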

13. You are now ready to restart PostgreSQL and have it process all the
    archived log files:

    pg_ctl -l /usr/local/pgsql/data/log.out start

14. While the WAL files are being processed, a 'ps -ax' display will
    have lines in it that look something like this:

       28039 pts/0    S      0:00 /disk1/postgres/pgsql/bin/postgres
       28040 ?        Ds     0:00 postgres: startup process

    You can also look at the /usr/local/pgsql/data/log.out file to see
    which WAL file it is currently processing:

    tail /usr/local/pgsql/data/log.out

    Once all the WAL files have been processed (which could take an
    hour or longer, depending on how many WAL files need to be processed),
    'ps -ax' will look more like this:

       28039 pts/0    S      0:00 /disk1/postgres/pgsql/bin/postgres
       28310 ?        Ss     0:00 postgres: writer process
       28311 ?        Ss     0:00 postgres: archiver process
       28312 ?        Ss     0:00 postgres: stats collector process
 
    Once this happens, the backup database server is ready for use. 
    However, it will only be as current as the most recent WAL file that
    was processed, and in order to bring it to a more current state all of
    the steps given above have to be performed again.




Re: [BUGS] BUG #4381: Postgresql daemon won't stay in the foreground

I'm starting it with "postgres -D <datadir>", and it immediately forks
into the background....

Is there something I can do to trap some debug output for you?

Heikki Linnakangas wrote:
> Tom wrote:
>> I want to run postgres under daemontools, but no matter what I do,
>> postgres
>> will not stay in the foreground... has anyone else seen this or know
>> how to
>> fix it?
>
> How are you starting it then? Just run "postgres -D <datadir>", and it
> will stay in foreground.
>
> PS. this is unlikely a bug. Please use the pgsql-general mailing list
> next time.
>


--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [pgsql-es-ayuda] Error installing 8.3

José Antonio,

I am one of the people who reported the same error you have, and I
never received a reply from anyone, so we had to drop PostgreSQL as
the database we used in a project.

I hope that this time your question does get an answer and a solution.

Regards.

On 24 August 2008 at 12:56, Jose Antonio Pons
<jose.a.pons@gmail.com> wrote:
> Greetings to the list.
>
> For some time I had PostgreSQL 8.2.0 installed on a Windows
> Server 2003 R2 SP2 machine, to run an ERP that I am testing.  To check
> whether this software would also run on 8.3, I uninstalled it in the
> test environment and installed again.
>
> In short, the problem I ran into is the following: the installation is
> launched from a user session with full rights (Administrator).  When
> the installation is completing and tries to start the database, the
> operation aborts ("Rolling back action") and the file copy and the
> rest of the configuration steps are cancelled; finally this
> "FatalError" message appears: "Installation ended prematurely because
> of an error".  Only the system user created for the installation remains.
> I have tried all three releases in the series (8.3.0, 8.3.1 and 8.3.3)
> with the same result.  To rule out any interference from previous
> installations, I formatted and reinstalled the machine from scratch,
> but the error repeats.
> In the end, I managed to install version 8.2.9 without problems.
> The error also occurs when installing on another machine running
> Windows XP SP2.
>
> Searching the list archives, I found that the problem was already
> mentioned in the following posts, which went unanswered:
>
> http://archives.postgresql.org/pgsql-es-ayuda/2008-02/msg00524.php
> http://archives.postgresql.org/pgsql-es-ayuda/2008-03/msg00363.php
> http://archives.postgresql.org/pgsql-es-ayuda/2008-03/msg00482.php
> http://archives.postgresql.org/pgsql-es-ayuda/2008-06/msg00310.php
>
> I'd appreciate any information on the subject.
> Regards.
>
> José Antonio Pons
>
>
>
> Text of the two error notices that appear in the event viewer:
>
> Event type: Information
> Event source: MsiInstaller
> Event category: None
> Event ID: 11708
> Date: 21/08/2008
> Time: 18:14:05
> User: NET001P01\Administrador
> Computer: NET001P01
> Description:
> Product: PostgreSQL 8.3 -- Installation failed.
> Datos:
> 0000: 7b 42 38 32 33 36 33 32 {B823632
> 0008: 46 2d 33 42 37 32 2d 34 F-3B72-4
> 0010: 35 31 34 2d 38 38 36 31 514-8861
> 0018: 2d 42 39 36 31 43 45 32 -B961CE2
> 0020: 36 33 32 32 34 7d 63224}
>
> Event type: Information
> Event source: MsiInstaller
> Event category: None
> Event ID: 11708
> Date: 21/08/2008
> Time: 18:11:42
> User: NET001P01\Administrador
> Computer: NET001P01
> Description:
> Product: PostgreSQL MUI Wrapper 8.3 -- Installation failed.
> Datos:
> 0000: 7b 34 41 46 46 42 39 46 {4AFFB9F
> 0008: 38 2d 41 30 37 31 2d 34 8-A071-4
> 0010: 35 44 45 2d 42 36 35 39 5DE-B659
> 0018: 2d 36 45 32 45 34 44 45 -6E2E4DE
> 0020: 32 42 41 42 44 7d 2BABD}
--
TIP 7: don't forget to increase the "free space map" configuration

Re: [PERFORM] control the number of clog files and xlog files

Duan Ligong wrote:
> Alvaro, Thanks for your answer.
> It would be very helpful.
>
> > > Would you like to be so kind as to answer the following questions:
> > >
> > > - Is there any way to control the number of clog files and xlog files?
> > > I encounter an issue that there are too many clog files under the
> > > pg_clog/ directory which occupy more space than I can endure..
> >
> > pg_clog files are controlled by tuple freezing, which is done by vacuum,
> > and it depends on the autovacuum_min_freeze_age parameter and
> > vacuum_freeze_min_age. Please read
>
> So can we reduce the number of clog files by increasing the
> autovacuum_min_freeze_age and vacuum_freeze_min_age
> parameters?

Yes, but by decreasing those values.

Sorry, you ask more questions than I have time to answer right now.

> I encountered a scenario where there are many files, and some of
> them are as old as one month.  Are all these files, including the
> old ones, still useful to PostgreSQL?  When will they be deleted or
> rotated?  Or should they be deleted and maintained by external
> programs?

Yes, those files are still useful. They will be deleted eventually.

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

Re: [HACKERS] TODO <-> Commitfest

David Fetter wrote:
> On Tue, Aug 26, 2008 at 09:40:26PM -0400, Alvaro Herrera wrote:

> > Huh, what's a bidirectional link in this context?
> >
> > I think both the Commitfest and Todo are just ordered collections of
> > pointers to the archives,
>
> For example, Common Table Expressions is both on the TODO list and on
> September's Commitfest. They should probably point to each other so
> long as such a relationship exists.

Hmm, you are right; however, it makes me uneasy to give committers the
burden of keeping both things updated (on top of all the other things
they already do).

On the other hand, maybe it will be less of a problem if the two link
to each other, so that we know to mark the Todo item as done; we've
seen that forgotten before.

(Actually what this says is that we need a single link: from the
Commitfest to Todo. Linking the other way around is probably not as
useful.)

--
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Re: [PERFORM] select on 22 GB table causes "An I/O error occured while sending to the backend." exception

In response to henk de wit <henk53602@hotmail.com>:

> > What do your various logs (pgsql, application, etc...) have to say?
>
> There is hardly anything helpful in the pgsql log. The application
> log doesn't mention anything either. We log a great deal of
> information in our application, but there's nothing out of the
> ordinary there, although there's of course always a chance that
> somewhere we missed something.

There should be something in a log somewhere. Someone suggested the oom
killer might be getting you, if so there should be something in one of
the system logs.

If you can't find anything, then you need to beef up your logging. Try
increasing the amount of information PG logs by tweaking the
postgresql.conf settings. Then run iostat, vmstat and top in an endless
loop, dumping their output to files (I recommend running date(1) between
each run, otherwise you can't correlate the output to the time of
occurrence ;)
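The suggested logging loop could be sketched roughly like this (the log directory, interval, and the RUNS knob are my assumptions; set RUNS=0 to loop until interrupted, as the advice above intends):

```shell
#!/bin/sh
# Sketch of the suggested monitoring loop: sample iostat, vmstat and top,
# writing a date(1) timestamp before each sample so the output can be
# matched to the time of an incident.  Paths and defaults are assumptions.
LOGDIR=${LOGDIR:-./pg-monitor}
INTERVAL=${INTERVAL:-60}
RUNS=${RUNS:-1}                 # set RUNS=0 to loop until interrupted
mkdir -p "$LOGDIR"
n=0
while [ "$RUNS" -eq 0 ] || [ "$n" -lt "$RUNS" ]; do
    for tool in iostat vmstat "top -b -n 1"; do
        name=${tool%% *}                 # one log file per tool
        {
            date                         # timestamp before each sample
            $tool 2>&1 || echo "($name not available on this host)"
        } >>"$LOGDIR/$name.log"
    done
    n=$((n + 1))
    if [ "$RUNS" -eq 0 ] || [ "$n" -lt "$RUNS" ]; then
        sleep "$INTERVAL"
    fi
done
```

Run it under nohup or a terminal multiplexer so it survives the session; each log then interleaves timestamps with samples.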

While you've got all this extra logging going and you're waiting for the
problem to happen again, do an audit of your postgresql.conf settings for
memory usage and see if they actually add up. How much RAM does the
system have? How much of it is free? How much of that are you eating
with shared_buffers? How much sort_mem did you tell PG it has? Have
you told PG that it has more memory than the machine actually has?

I've frequently recommended installing pg_buffercache and using mrtg
or something similar to graph various values from it and other easily
accessible statistics in PG and the operating system. The overhead of
collecting and graphing those values is minimal, and having the data
from those graphs can often be the little red arrow that points you to
the solution to problems like these. Not to mention the historical
data generally tells you months ahead of time when you're going to
need to scale up to bigger hardware.
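As a rough sketch of such a sampler (the database name and query are my assumptions; the pg_buffercache contrib module must be installed), something like the following could be fed to mrtg or a similar grapher:

```shell
#!/bin/sh
# Hypothetical sampler: count shared buffers currently holding a page.
# Database name is an assumption; requires the pg_buffercache module.
psql -At -d postgres -c \
    "SELECT count(*) FROM pg_buffercache WHERE relfilenode IS NOT NULL;"
```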

On a side note, what version of PG are you using? If it was in a
previous email, I missed it.

Hope this helps.

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

[BUGS] BUG #4382: there is an exaggerated increase of execution times putting a 'case' condition in this update

The following bug has been logged online:

Bug reference: 4382
Logged by: Gianluca Buttarelli
Email address: gianluca.buttarelli@business-partner.it
PostgreSQL version: 8.1.11
Operating system: linux
Description: there is an exaggerated increase of execution times
putting a 'case' condition in this update
Details:

update accounts_changing
set closed_sc_nowcurrent_nbr = (
    select count(*)
    from crm_shopping_carts, qab_wf_states
    where accounts_changing.account_id = crm_shopping_carts.agency_id
      and qab_wf_states.state_code in ('SC-FAT1', 'SC-FAT2')
      and crm_shopping_carts.state_id = qab_wf_states.state_id
      and (case
             when (CURRENT_DATE > date(EXTRACT(YEAR FROM CURRENT_DATE) || '-10-31')
                   and CURRENT_DATE < date(EXTRACT(YEAR FROM CURRENT_DATE) || '-12-31'))
             then (crm_shopping_carts.creation_date
                   between date((EXTRACT(YEAR FROM CURRENT_DATE)) || '-11-01')
                       and CURRENT_DATE)
             else (crm_shopping_carts.creation_date
                   between date((EXTRACT(YEAR FROM CURRENT_DATE) - 1) || '-11-01')
                       and CURRENT_DATE)
           end))
where active_indcr = '1'

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [GENERAL] loop vs. aggregate was: update and group by/aggregate

On Wed, 27 Aug 2008 12:45:42 +0100
Sam Mason <sam@samason.me.uk> wrote:

> On Wed, Aug 27, 2008 at 01:19:30PM +0200, Ivan Sergio Borgonovo
> wrote:
> > but this looks much slower than the function:
> > function: 113sec
> > vs.
> > single statement: 488sec
> > I repeated the test 3 times with similar results.
>
> Have you been vacuuming (non-full) between runs? and as always,
> are the stats reasonably up to date?

There is autovacuum running regularly, and I vacuumed full, analyzed
and reindexed everything recently; that means all changes to the DB
were done BEFORE the maintenance, but I then executed several selects
on the tables (including the ones involved).
I ran the function and the statement 3 times, one after the other...
so they should have been in a similar situation.

> > Can anybody explain why aggregates under perform so badly?

> You could try just running the SELECT part to see how long the
> aggregation takes.

33sec
Surprising.

> > I just read that most of the times I can't be smarter than the
> > planner and I thought that this would be one of the
> > circumstances theplanner could outperform my handwritten
> > function.

> It's quite easy to outsmart the planner with large amounts of
> data, but it's surprising how well it does most of the time.
> Generally you can just write whatever is obvious and the planner
> will do something good with it. If it doesn't do the right thing
> then you can worry about performance, rather than most languages
> where you have to worry about performance from the start.

I really thought that in this case the planner was going to outsmart
me, since in that statement it could see many more optimisation
opportunities than I could, knowing the nature of the data.

> > here is the explain:
> > "Hash Join (cost=137408.51..271599.78 rows=209674 width=221)"
> > "  Hash Cond: ("outer".itemid = "inner".itemid)"
> > "  ->  HashAggregate (cost=32994.81..36664.11 rows=209674 width=58)"
> > "        ->  Hash Join (cost=8544.62..31946.44 rows=209674 width=58)"
> > "              Hash Cond: ("outer".authorid = "inner".authorid)"
> > "              ->  Seq Scan on catalog_itemauthor ia (cost=0.00..10297.21 rows=629021 width=12)"
> > "              ->  Hash (cost=8309.00..8309.00 rows=94248 width=54)"
> > "                    ->  Seq Scan on catalog_author a (cost=0.00..8309.00 rows=94248 width=54)"
> > "                          Filter: ((name IS NOT NULL) AND (length(btrim((name)::text, E' \011'::text)) > 1))"
> > "  ->  Hash (cost=79538.96..79538.96 rows=833496 width=189)"
> > "        ->  Seq Scan on catalog_items (cost=0.00..79538.96 rows=833496 width=189)"
>
> The seqscan of catalog_items looks a little strange to me, have
> you set work_mem to some obscenely big value?

32MB.
I think PostgreSQL users, and its image, would greatly benefit from
some sort of tool that gives suggestions on how to set postgresql.conf
according to the hardware, maybe asking the user some questions.
Tom suggested raising that number into the range of tens of MB for
another problem. I saw the hot spot was at 16MB, and considering I
was expecting the DB to get bigger, I increased it to 32MB.

> > Sam... I made the same error the first time I wrote the above
> > statement... missing the where clause. But I can't understand the
> > need for your longer version "to ensure that characters trimmed
> > from the authors' name are consistent".

> You're trimming an extra close square bracket (']') in one of the
> trim statements and not in the other. If you just do it in one
> place then you don't have to worry about inconsistency.

Guess: that was a typo. regexp brain context switching :)

> > I prefer to put constraint on the length of varchar as an early
> > warning for something that is unexpected.
> > eg. if the loop was not working as expected I'd get an error.
> > Infact that's exactly what happened during development of the
> > above function.

> OK, Knuth generally seemed to recommend doing similar things. Most
> modern programming people tend to advocate removing artificial
> limits as some fundamental good. I'm never quite so sure, either
> way!

I do know Knuth and I think I share his opinions. I don't know
"modern programming people" and I'm alien to the concept of a
"fundamental good".
But well, I'm here to learn. Could you point me to some explanation
of why it should be a "fundamental good" in a DB context?
I'd say that:
a) you may discover that your forecast of the upper limit was wrong,
and you could see things break suddenly
b) you incur a small overhead (do you?)

I do think that most of the time it is worthwhile (and part of the
problem) to make a reasonable forecast. If I can't really make one,
or it requires too much effort to make a reasonable one, I at least
start developing with an "unreasonable" constraint just to catch some
errors earlier, then drop the constraint and leave a comment.

--
Ivan Sergio Borgonovo
http://www.webthatworks.it


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [BUGS] BUG #4381: Postgresql daemon won't stay in the foreground

Tom wrote:
> I want to run postgres under daemontools, but no matter what I do, postgres
> will not stay in the foreground... has anyone else seen this or know how to
> fix it?

How are you starting it, then? Just run "postgres -D <datadir>" and it
will stay in the foreground.
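For daemontools specifically, the usual pattern is a run script that execs postgres in the foreground; a sketch (paths and the setuidgid user are assumptions, not from this thread):

```shell
#!/bin/sh
# Hypothetical /service/postgresql/run script for daemontools.
# Binary path, data directory, and user are assumptions -- adjust them.
exec 2>&1
exec setuidgid postgres /usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
```

Because postgres never daemonizes here, supervise keeps it as a direct child and can restart it if it exits.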

PS. this is unlikely to be a bug. Please use the pgsql-general mailing
list next time.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Re: [pgsql-fr-generale] [resolved] postgresql php and Insert

BPascal wrote:

A syntax problem (and some not-very-pretty code to fix).

Thanks everyone for your help.
I won't hesitate to come back with more questions.

Regards
--
Pascal


--
Sent via pgsql-fr-generale mailing list (pgsql-fr-generale@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-fr-generale